The so-called cocktail party problem refers to the challenge facing humans, animals, and machines alike when they process sounds in the presence of competing sources, distortions, and changing soundscapes. Her research tackles this problem using a systems approach with two foci: (1) understanding how the brain deals with complex soundscapes, and (2) developing technologies that emulate this faculty as intelligently as biology does. By reverse engineering how the brain processes sounds, her lab develops robust sound technologies and computational models of auditory perception, bridging the gap between neuroscience and audio technologies through the study of the computational and neural bases of sound and speech perception and behavior in complex acoustic environments. This multidisciplinary work is generating insights into brain sciences, adaptive signal processing, audio technologies, and medical systems, including new diagnostic technologies that leverage body sounds to tackle public health problems, such as pneumonia, that affect millions worldwide.
Specialization: Computational neuroscience, auditory processing, sound technologies
The overarching research goal of Dr. Elhilali’s lab is to understand intelligent processing of sounds in challenging listening environments.
Her work aims to advance neuroscience discovery by uniting neuroscience, engineering, and computational data science to understand the structure and function of the brain.