Project description
How the brain localizes sounds
The ability to determine where a sound comes from can be crucial for survival. Spatial hearing is the skill used to locate the source of a sound, and it often involves filtering out background noise. Understanding how the brain determines the location of complex sounds, such as someone's voice on a busy street, is important for treating hearing loss, which affects more than 34 million citizens of the European Union. The EU-funded SOLOC project will combine computational modelling (deep neural networks) with modern techniques from clinical audiology and neuroscience to study the brain mechanisms underlying sound localization. The project brings together neuroscience, computational modelling and clinical audiology.
Objective
With the rise of urbanization, silence has become a rarity. Sound is all around us, and our hearing skills are essential in everyday life. Spatial hearing is one of these skills: we use sound localization to determine where something is happening in our surroundings, or to ‘zoom in’ on a friend’s voice and filter out the background noise in a bar. But how does the brain compute the location of real-life, complex sounds such as a voice? Knowledge of these neural computational mechanisms is crucial to develop remedies for when spatial hearing fails, such as in hearing loss (>34 million EU citizens).

Hearing-impaired (HI) listeners experience great difficulties with understanding speech in everyday, noisy environments despite the use of an assistive hearing device such as a cochlear implant (CI). Their difficulties are partially caused by reduced spatial hearing, which hampers filtering out a specific sound such as a voice based on its position. The resulting communication problems impact personal well-being as well as the economy (e.g. higher unemployment rates).

In SOLOC, I use an innovative, interdisciplinary approach combining cutting-edge computational modelling (deep neural networks) with state-of-the-art neuroscience and clinical audiology to gain insight into the brain mechanisms underpinning sound localization. Using this knowledge, I explore signal processing strategies for CIs that boost spatial encoding in the brain to improve speech-in-noise understanding. Through this Global Fellowship, I connect the unique computational expertise of Prof. Mesgarani (Columbia University) and his experience with translating computational neuroscience into clinical applications to the exceptional medical expertise on hearing loss and CIs of Prof. Kremer (Maastricht University). By implementing SOLOC, I will develop into a multidisciplinary, independent researcher operating at the interface of neuroscience, computational modelling, and clinical audiology.
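For readers unfamiliar with what "deep neural networks for sound localization" might look like in practice, the Python sketch below is a toy illustration only, not the SOLOC models: it trains a small network to map two classical binaural cues, the interaural time difference (ITD) and interaural level difference (ILD), onto a source azimuth. The head geometry, the cue formulas and the network size are simplified assumptions chosen for brevity.

import numpy as np
import torch
import torch.nn as nn

C = 343.0     # speed of sound (m/s)
HEAD = 0.21   # assumed interaural distance (m)

def binaural_cues(azimuth_rad):
    """Crude ITD/ILD model for a source at a given azimuth (radians)."""
    itd = HEAD * np.sin(azimuth_rad) / C        # seconds
    ild = 10.0 * np.sin(azimuth_rad)            # dB, toy approximation
    return np.stack([itd * 1e3, ild], axis=-1)  # ITD expressed in ms

# Synthetic training data: azimuths between -90 and +90 degrees.
rng = np.random.default_rng(0)
az = rng.uniform(-np.pi / 2, np.pi / 2, size=2000)
x = torch.tensor(binaural_cues(az), dtype=torch.float32)
y = torch.tensor(az, dtype=torch.float32).unsqueeze(1)

# Small feed-forward network predicting azimuth (radians) from (ITD, ILD).
model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(),
                      nn.Linear(32, 32), nn.ReLU(),
                      nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for step in range(500):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

# Query the trained model for a source 30 degrees to the right.
test = torch.tensor(binaural_cues(np.array([np.radians(30.0)])),
                    dtype=torch.float32)
print("estimated azimuth (deg):", np.degrees(model(test).item()))

The real research question is far harder than this sketch suggests: for natural, reverberant scenes the relevant cues are time-varying and frequency-dependent, which is why the project combines such models with neuroscience and clinical audiology rather than relying on idealized cues.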
Field of science
CORDIS classifies projects with EuroSciVoc, a multilingual taxonomy of fields of science, through a semi-automatic process based on natural language processing techniques.
- engineering and technology > electrical engineering, electronic engineering, information engineering > electronic engineering > signal processing
- natural sciences > biological sciences > neurobiology > computational neuroscience
- social sciences > sociology > social issues > unemployment
- natural sciences > computer and information sciences > artificial intelligence > computational intelligence
Keywords
Programme(s)
Funding scheme
MSCA-IF - Marie Skłodowska-Curie Individual Fellowships (IF)
Coordinator
6525 XZ Nijmegen
Netherlands