Content archived on 2024-06-18

Development of a unified speech processing strategy for combined electric and acoustic auditory stimulation

Final Report Summary - BSPS (Development of a unified speech processing strategy for combined electric and acoustic auditory stimulation)

Cochlear implants (CIs) can restore speech perception in deaf subjects by electrical stimulation of the auditory nerve. Because CI users sometimes perform better than severely hearing-impaired subjects using hearing aids (HAs), implantation criteria are changing, leading to a steeply growing population of patients with a CI in one ear and residual hearing in the other. When the residual hearing is used together with the CI (usually via a HA), this is called bimodal stimulation.

The additional use of residual hearing has been shown to slightly improve speech perception in noise, localisation performance, pitch perception and music appreciation, compared with a CI alone. In clinical practice and in these studies, separate CI and HA devices are used; in most cases, no effort was made to synchronise the devices, preserve binaural cues, or adapt the processing in the speech processor to take the contralateral acoustic input into account. There is currently no unified bimodal speech processing strategy, which leads to suboptimal performance.

Our main objective is to improve pitch perception and music appreciation, sound source localisation, and speech recognition for bimodal listeners by developing a true bimodal speech processing strategy. So far, we have developed two different sound processing strategies: modulation enhancement (MEnS) and loudness-model-based processing (SCORE).

MEnS detects peaks in the temporal envelope of the filtered acoustic signal and modulates the channels of the electric signal synchronously with the enhanced envelope. In experiments with six bimodal listeners we have found that this improves the perception of interaural time differences (ITDs) in vowels, and enables the subjects to lateralise sounds based solely on ITDs.
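
To illustrate the general idea (this is a rough sketch, not the actual MEnS implementation: the envelope extractor, peak-detection settings, burst width and modulation depth below are all assumptions), the following Python code detects peaks in the temporal envelope of an acoustic band and imposes a synchronous, deepened modulation on the CI channel envelopes.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, find_peaks

def temporal_envelope(band_signal, fs, cutoff_hz=50.0):
    """Smoothed temporal envelope: Hilbert magnitude followed by a
    low-pass filter (the 50 Hz cutoff is an illustrative choice)."""
    env = np.abs(hilbert(band_signal))
    b, a = butter(2, cutoff_hz / (fs / 2), btype="low")
    return filtfilt(b, a, env)

def enhance_modulation(acoustic_band, ci_channel_envelopes, fs,
                       burst_width_s=0.004, depth=1.0):
    """Detect peaks in the acoustic-band envelope and apply synchronous,
    deepened modulation to every CI channel envelope.  All parameter
    values here are illustrative, not taken from MEnS."""
    env = temporal_envelope(acoustic_band, fs)
    peaks, _ = find_peaks(env, distance=int(0.005 * fs))  # peaks >= 5 ms apart

    # Enhanced modulation function: short raised bursts at each detected
    # peak, mild attenuation elsewhere, so the peak timing dominates.
    gain = np.full(env.shape, 1.0 - 0.5 * depth)
    half = int(burst_width_s * fs / 2)
    for p in peaks:
        lo, hi = max(0, p - half), min(len(gain), p + half)
        gain[lo:hi] = 1.0 + depth
    # Apply the same gain, sample-synchronously, to all electric channels
    # (ci_channel_envelopes assumed shaped [n_channels, n_samples]).
    return ci_channel_envelopes * gain[np.newaxis, :]
```

The key point, as in MEnS, is that the modulation imposed on the electric signal is locked in time to the envelope peaks of the acoustic signal heard by the other ear, which is what makes the interaural timing cue usable.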

As it does not seem possible to normalise loudness in bimodal stimulation simply by setting the parameters of current commercial devices, we developed a new signal processing scheme, called SCORE bimodal. SCORE estimates the loudness of the signals at the microphones of the two devices, as it would be perceived by a normal-hearing listener, and adjusts the loudness of the electric and acoustic signals at the end of the processing chain so that the bimodal listener receives the same loudness percept. This should solve the loudness-balance issues of current clinical devices and thus improve wearing comfort, speech perception and sound source localisation based on interaural level differences. In experiments with six bimodal listeners we found that SCORE improved binaural balance significantly, by 59 % on average. We also evaluated speech perception in quiet and in noise with SCORE and found no significant effect for speech in noise, but a significant improvement in phoneme score for speech in quiet, compared with a condition without SCORE.
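
The following sketch illustrates the underlying idea of loudness equalisation at the end of the processing chain. The loudness functions here are crude placeholders (the real SCORE strategy uses dedicated models of normal, electric and impaired acoustic loudness growth), and the scaling factors and reference level are assumptions made purely for illustration.

```python
import numpy as np

def normal_loudness(frame):
    """Placeholder normal-hearing loudness estimate for one short frame:
    loudness roughly doubles per 10 dB above an arbitrary reference.
    A real implementation would use a proper auditory loudness model."""
    rms = np.sqrt(np.mean(frame ** 2)) + 1e-12
    level_db = 20.0 * np.log10(rms)
    return 2.0 ** ((level_db - 40.0) / 10.0)

def impaired_loudness(out_frame, side):
    """Placeholder listener-specific loudness of the processed output of
    the electric ('ci') or aided acoustic ('ha') pathway."""
    scale = 0.6 if side == "ci" else 0.8   # illustrative compressive factors
    return scale * normal_loudness(out_frame)

def score_gain(mic_frame, out_frame, side):
    """Per-frame amplitude gain that pushes the loudness evoked by the
    processed output towards the loudness a normal-hearing listener would
    perceive at the microphone -- the core idea of SCORE, heavily
    simplified.  Assumes loudness scales with signal power; the real
    strategy inverts the listener's loudness-growth functions."""
    target = normal_loudness(mic_frame)
    actual = impaired_loudness(out_frame, side) + 1e-12
    return float(np.sqrt(target / actual))
```

In a real device such a correction would be applied separately at the end of the CI and HA processing chains, so that the same input evokes matched, normal-like loudness at the two ears and interaural level differences are preserved.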

We expect to be able to show improved perception of both interaural level and time differences with the application of SCORE and MEnS, respectively, which should lead to improved sound source localisation. The application of MEnS should also lead to improved rate-pitch perception and possibly to binaural unmasking of speech in noise, and the application of SCORE should increase the wearing comfort of bimodal devices. We believe this is the first sound processing strategy and fitting method developed specifically for bimodal stimulation. As it is estimated that more than half of newly implanted patients currently have contralateral residual hearing, our sound processing could become widely used.