QLS Seminar - Josh McDermott
New Models of Human Hearing via Machine Learning
Josh McDermott, MIT
Tuesday February 22, 12-1pm
Zoom Link:
Abstract: Humans derive an enormous amount of information about the world from sound. This talk will describe our recent efforts to leverage contemporary machine learning to build neural network models of our auditory abilities and their instantiation in the brain. Such models have enabled a qualitative step forward in our ability to account for real-world auditory behavior and illuminate function within auditory cortex. But they also exhibit substantial discrepancies with human perceptual systems that we are currently trying to understand and eliminate.
Bio: Josh McDermott studies sound and hearing in the Department of Brain and Cognitive Sciences at MIT, where he heads the Laboratory for Computational Audition. His research addresses human and machine audition using tools from engineering, neuroscience, and psychology. McDermott obtained a BA in Brain and Cognitive Science from Harvard, an MPhil in Computational Neuroscience from University College London, a PhD in Brain and Cognitive Science from MIT, and postdoctoral training in psychoacoustics at the University of Minnesota and in computational neuroscience at NYU. He is the recipient of a Marshall Scholarship, a McDonnell Scholar Award, an NSF CAREER Award, a Troland Research Award, and the BCS Award for Excellence in Undergraduate Advising.