Neural networks for speech separation for binaural hearing aids

  • Authors:
  • Cosme Llerena-Aguilar; Roberto Gil-Pita; David Ayllón

  • Affiliations:
  • University of Alcala, Signal Theory and Communications, Spain (all authors)

  • Venue:
  • GAVTASC'11 Proceedings of the 11th WSEAS international conference on Signal processing, computational geometry and artificial vision, and Proceedings of the 11th WSEAS international conference on Systems theory and scientific computation
  • Year:
  • 2011


Abstract

This paper deals with the use of neural networks for separating speech from other noisy sources in binaural hearing aids. In sound separation systems implemented in binaural hearing aids, the right and left devices must transmit to each other some of the parameters involved in the speech separation algorithm. The problem is that this transmission reduces battery life, which is one of the most important constraints on the design of advanced algorithms for hearing aids. To solve this problem, we quantize the parameters transmitted from one hearing aid to the other with an adequate number of bits, while using a neural network to keep performance as high as possible, in an effort to balance a low bit rate (and thus low power consumption) against good speech separation.
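The core trade-off the abstract describes, fewer quantization bits mean a lower transmission bit rate but a larger quantization error on the exchanged parameters, can be illustrated with a minimal sketch. This is not the paper's method: the `uniform_quantize` function, the mid-rise quantizer design, and the random stand-in parameters are all illustrative assumptions.

```python
import random


def uniform_quantize(x, n_bits, x_min=-1.0, x_max=1.0):
    """Mid-rise uniform quantizer: map x in [x_min, x_max] to one of
    2**n_bits reconstruction levels (illustrative, not the paper's scheme)."""
    levels = 2 ** n_bits
    step = (x_max - x_min) / levels
    # Clamp the level index so out-of-range inputs saturate at the edges.
    idx = min(max(int((x - x_min) / step), 0), levels - 1)
    return x_min + (idx + 0.5) * step


random.seed(0)
# Stand-in for the parameters one hearing aid would transmit to the other.
params = [random.uniform(-1.0, 1.0) for _ in range(1000)]

for n_bits in (2, 4, 8):
    mse = sum((p - uniform_quantize(p, n_bits)) ** 2 for p in params) / len(params)
    print(f"{n_bits} bits per parameter -> quantization MSE {mse:.2e}")
```

Each extra bit halves the quantizer step size, so the mean squared error drops roughly fourfold per bit while the transmitted bit rate grows linearly; the paper's neural network is then tasked with keeping separation quality high even at the low end of this bit budget.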