Simon Doclo¹, Sharon Gannot², Marc Moonen³, and Ann Spriet³
¹ University of Oldenburg, Signal Processing Group, Oldenburg, Germany
² Bar-Ilan University, School of Engineering, Ramat-Gan, Israel
³ Katholieke Universiteit Leuven, Dept. of Electrical Engineering, Leuven, Belgium
Noise reduction algorithms in hearing aids are crucial for hearing-impaired persons to improve speech intelligibility in background noise (e.g., traffic, cocktail-party situations). Many current hearing aids have more than one microphone, enabling the use of multimicrophone speech enhancement algorithms. In comparison with single-microphone algorithms, which can only use spectral and temporal information, multimicrophone algorithms can additionally exploit the spatial information of the sound sources. This generally results in better performance, especially when the speech and the noise sources are spatially separated.
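To make the spatial-information point concrete, the sketch below shows a delay-and-sum beamformer, the simplest multimicrophone technique: the channels are time-aligned toward the target direction and averaged, so the coherent speech adds constructively while spatially incoherent noise partially cancels. This is only an illustrative sketch under simplifying assumptions (integer-sample delays, circular shifting); it is not an algorithm discussed in this chapter.

```python
import numpy as np

def delay_and_sum(signals, delays, fs):
    """Illustrative delay-and-sum beamformer (hypothetical helper).

    signals : (n_mics, n_samples) array of microphone signals.
    delays  : per-microphone propagation delays (seconds) of the target.
    fs      : sampling rate (Hz).

    Each channel is advanced by its (rounded, integer-sample) delay so the
    target components align, then the channels are averaged. np.roll wraps
    around at the edges, which is acceptable for this short sketch.
    """
    n_mics, n_samples = signals.shape
    out = np.zeros(n_samples)
    for m in range(n_mics):
        shift = int(round(delays[m] * fs))
        out += np.roll(signals[m], -shift)  # advance channel m into alignment
    return out / n_mics
```

In the noise-free case with exact integer-sample delays, the output reproduces the target signal; with uncorrelated noise on each microphone, averaging M aligned channels reduces the noise power by a factor of M.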
Since many hearing-impaired persons have a hearing loss at both ears, they are fitted with a hearing aid at each ear. In a so-called bilateral system, no cooperation between the two hearing aids takes place. Current noise reduction algorithms in bilateral hearing aids are not designed to preserve the binaural localization cues, that is, the interaural time difference (ITD) and the interaural level difference (ILD). These ...
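For readers unfamiliar with the binaural cues named above, the sketch below estimates a broadband ITD (as the lag of the peak of the interaural cross-correlation) and a broadband ILD (as an energy ratio in dB) from a pair of ear signals. The function name and these simple broadband definitions are illustrative assumptions; practical models evaluate the cues per frequency band.

```python
import numpy as np

def itd_ild(left, right, fs):
    """Estimate broadband ITD (seconds) and ILD (dB) -- hypothetical helper.

    ITD: lag of the cross-correlation peak between the ear signals;
         a positive value means the left signal is delayed w.r.t. the right.
    ILD: ratio of the signal energies at the two ears, in dB.
    """
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)   # lag in samples
    itd = lag / fs
    ild = 10 * np.log10(np.sum(left**2) / np.sum(right**2))
    return itd, ild
```

For a source on the left, the left-ear signal leads (negative ITD by this sign convention) and is louder (positive ILD); bilateral processing that alters these two quantities independently at each ear distorts the perceived source location.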