The so-called cocktail party problem refers to the challenge facing humans, animals, and machines alike when they process sound in the presence of competing sources, distortions, and changing soundscapes. The lab tackles this problem using a systems approach with two foci: (1) understanding how the brain deals with complex soundscapes, and (2) developing technologies that emulate this faculty as intelligently as biology does. Dr. Elhilali's research reverse-engineers how the brain processes sound in order to develop robust sound technologies and computational models of auditory perception. Her work bridges the gap between neuroscience and audio technologies by examining the computational and neural bases of sound and speech perception and behavior in complex acoustic environments. This multidisciplinary research has yielded insights into brain sciences, adaptive signal processing, audio technologies, and medical systems, including new diagnostic technologies that leverage body sounds to tackle public health problems, such as pneumonia, that affect millions worldwide.
Mounya Elhilali, PhD
Professor
Specialization: Computational neuroscience, auditory processing, sound technologies
Contact
Laboratory for Computational and Audio Perception
Electrical and Computer Engineering
The Johns Hopkins University
3400 North Charles Street, Barton Hall Room 323
Baltimore, Maryland 21218 USA
(410) 516-5577
The overarching research goal of Dr. Elhilali’s lab is to understand intelligent processing of sounds in challenging listening environments.