Project description
How the brain localises sounds
The ability to determine where a sound is coming from can be crucial for survival. Spatial hearing lets us locate the source of a sound and filter out background noise. Understanding how the brain determines the location of complex sounds, such as someone's voice on a busy street, is important for treating hearing loss, which affects more than 34 million EU citizens. The EU-funded SOLOC project brings together neuroscience, computational modelling and clinical audiology: it will combine deep neural network modelling with state-of-the-art neuroscience and clinical audiology to study the brain mechanisms underpinning sound localisation.
Objective
With the rise of urbanization, silence has become a rarity. Sound is all around us, and our hearing skills are essential in everyday life. Spatial hearing is one of these skills: we use sound localization to determine where something is happening in our surroundings, or to 'zoom in' on a friend's voice and filter out the background noise in a bar. But how does the brain compute the location of real-life, complex sounds such as a voice?

Knowledge of these neural computational mechanisms is crucial to develop remedies for when spatial hearing fails, such as in hearing loss (>34 million EU citizens). Hearing-impaired (HI) listeners experience great difficulties with understanding speech in everyday, noisy environments despite the use of an assistive hearing device like a cochlear implant (CI). Their difficulties are partially caused by reduced spatial hearing, which hampers filtering out a specific sound such as a voice based on its position. The resulting communication problems impact personal wellbeing as well as the economy (e.g. higher unemployment rates).

In SOLOC, I use an innovative, interdisciplinary approach combining cutting-edge computational modelling (deep neural networks) with state-of-the-art neuroscience and clinical audiology to gain insight into the brain mechanisms underpinning sound localization. Using this knowledge, I explore signal processing strategies for CIs that boost spatial encoding in the brain to improve speech-in-noise understanding. Through this Global Fellowship, I connect the unique computational expertise of Prof. Mesgarani (Columbia University), and his experience with translating computational neuroscience into clinical applications, to the exceptional medical expertise on hearing loss and CIs of Prof. Kremer (Maastricht University). Hence, by implementing SOLOC I will develop into a multidisciplinary, independent researcher operating at the interface of neuroscience, computational modelling, and clinical audiology.
Fields of science
CORDIS classifies projects with EuroSciVoc, a multilingual taxonomy of fields of science, through a semi-automatic process based on NLP techniques.
- engineering and technology > electrical engineering, electronic engineering, information engineering > electronic engineering > signal processing
- natural sciences > biological sciences > neurobiology > computational neuroscience
- social sciences > sociology > social issues > unemployment
- natural sciences > computer and information sciences > artificial intelligence > computational intelligence
Funding Scheme
MSCA-IF - Marie Skłodowska-Curie Individual Fellowships (IF)

Coordinator
6525 XZ Nijmegen
Netherlands