Project description
How the brain localizes sounds
The ability to determine where a sound comes from can be vital for survival. Spatial hearing is the skill we use to localize the origin of a sound, and it often involves filtering out background noise. Understanding how the brain computes the position of complex sounds, such as a person's voice on a busy street, is important for treating hearing loss, which affects more than 34 million European citizens. The EU-funded SOLOC project combines computational modelling (deep neural networks) with cutting-edge neuroscience and clinical audiology to investigate the brain mechanisms underlying sound localization, bringing these three fields together in a single research effort.
Objective
With the rise of urbanization, silence has become a rarity. Sound is all around us, and our hearing skills are essential in everyday life. Spatial hearing is one of these skills: we use sound localization to determine where something is happening in our surroundings, or to 'zoom in' on a friend's voice and filter out the background noise in a bar. But how does the brain compute the location of real-life, complex sounds such as a voice? Knowledge of these neural computational mechanisms is crucial for developing remedies for when spatial hearing fails, such as in hearing loss (>34 million EU citizens). Hearing impaired (HI) listeners experience great difficulties understanding speech in everyday, noisy environments despite the use of an assistive hearing device such as a cochlear implant (CI). Their difficulties are partially caused by reduced spatial hearing, which hampers filtering out a specific sound, such as a voice, based on its position. The resulting communication problems impact personal wellbeing as well as the economy (e.g. higher unemployment rates).

In SOLOC, I use an innovative, intersectional approach combining cutting-edge computational modelling (deep neural networks) with state-of-the-art neuroscience and clinical audiology to gain insight into the brain mechanisms underpinning sound localization. Using this knowledge, I explore signal processing strategies for CIs that boost spatial encoding in the brain to improve speech-in-noise understanding.

Through this Global Fellowship, I connect the unique computational expertise of Prof. Mesgarani (Columbia University), and his experience with translating computational neuroscience into clinical applications, to the exceptional medical expertise on hearing loss and CIs of Prof. Kremer (Maastricht University). By implementing SOLOC, I will thus develop into a multidisciplinary, independent researcher operating at the interface of neuroscience, computational modelling, and clinical audiology.
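As a minimal illustration of the kind of binaural cue such localization models build on, the sketch below estimates the azimuth of a sound source from the interaural time difference (ITD) using a simple spherical-head (Woodworth) approximation. It is a toy example, not the SOLOC model or its deep-network approach; the sampling rate, head radius and test signal are assumptions made purely for demonstration.

```python
# Toy sketch (not the SOLOC model): estimate source azimuth from the
# interaural time difference (ITD), one of the binaural cues underlying
# spatial hearing. All parameters are assumptions for demonstration only.
import numpy as np

FS = 44_100             # sampling rate in Hz (assumed)
HEAD_RADIUS = 0.0875    # approximate human head radius in metres
SPEED_OF_SOUND = 343.0  # speed of sound in m/s

def woodworth_itd(azimuth_deg):
    """Spherical-head (Woodworth) ITD for a source at the given azimuth (-90..90 deg)."""
    theta = np.radians(azimuth_deg)
    return HEAD_RADIUS / SPEED_OF_SOUND * (theta + np.sin(theta))

def simulate_binaural(signal, azimuth_deg, fs=FS):
    """Create left/right ear signals by delaying one ear's copy according to the ITD."""
    lag = int(round(woodworth_itd(azimuth_deg) * fs))  # positive lag: left-ear copy is delayed
    left = np.pad(signal, (max(lag, 0), 0))[: len(signal)]
    right = np.pad(signal, (max(-lag, 0), 0))[: len(signal)]
    return left, right

def estimate_azimuth(left, right, fs=FS):
    """Recover azimuth from the lag that maximises the interaural cross-correlation."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)   # samples by which the left signal is delayed
    measured_itd = lag / fs
    grid = np.linspace(-90.0, 90.0, 721)       # 0.25-degree grid search over azimuths
    return grid[np.argmin(np.abs(woodworth_itd(grid) - measured_itd))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    burst = rng.standard_normal(FS // 10)      # 0.1 s white-noise burst as the source
    left, right = simulate_binaural(burst, azimuth_deg=30.0)
    print(f"estimated azimuth: {estimate_azimuth(left, right):.1f} deg")  # close to 30 deg
```

In real listening conditions, and in the deep-network models the project refers to, ITD is combined with other cues (interaural level differences, spectral filtering by the outer ear), which is what makes localization of complex, noisy sounds a hard computational problem.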
Field of science
CORDIS classifies projects with EuroSciVoc, a multilingual taxonomy of fields of science, through a semi-automatic process based on NLP techniques.
- engineering and technology › electrical engineering, electronic engineering, information engineering › electronic engineering › signal processing
- natural sciences › biological sciences › neurobiology › computational neuroscience
- social sciences › sociology › social issues › unemployment
- natural sciences › computer and information sciences › artificial intelligence › computational intelligence
Keywords
Programme(s)
Topic(s)
Funding scheme
MSCA-IF - Marie Skłodowska-Curie Individual Fellowships (IF)
Coordinator
6525 XZ Nijmegen
Netherlands