Application of neural networks to speech/music/noise classification in digital hearing aids

  • Authors:
  • Lorena Álvarez, Cosme Llerena, Enrique Alexandre

  • Affiliations:
  • University of Alcalá, Polytechnic School, Madrid, Spain (all authors)

  • Venue:
  • GAVTASC'11 Proceedings of the 11th WSEAS international conference on Signal processing, computational geometry and artificial vision, and Proceedings of the 11th WSEAS international conference on Systems theory and scientific computation
  • Year:
  • 2011

Abstract

This paper focuses on the development of an automatic sound classifier embedded in a digital hearing aid, aimed at improving listening comprehension when the user moves from one sound environment to another. The approach we propose consists of a neural network (NN)-based sound classifier that labels the input sound signal as speech, music, or noise. The key reason for choosing the NN-based approach is that neural networks can learn from appropriate training pattern sets and correctly classify patterns they have never encountered before. This ultimately leads to very good results, with a higher percentage of correct classifications than other popular algorithms such as the k-nearest neighbor (k-NN) or minimum mean square error (MSE) classifier, as shown in the results obtained in this paper.
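The classifier described above can be sketched as a small feedforward network with a three-way softmax output. The code below is a minimal illustration, not the paper's implementation: the audio feature extraction stage is replaced by hypothetical, well-separated synthetic feature vectors, and the network is a one-hidden-layer MLP trained with full-batch gradient descent on the softmax cross-entropy loss.

```python
import numpy as np

rng = np.random.default_rng(0)
CLASSES = ["speech", "music", "noise"]  # the paper's three target classes

def make_data(n_per_class=100, dim=8):
    # Hypothetical Gaussian feature clusters standing in for real audio
    # descriptors (e.g. spectral features); NOT the paper's dataset.
    X, y = [], []
    for c in range(len(CLASSES)):
        center = np.zeros(dim)
        center[c] = 4.0  # separate the class means along different axes
        X.append(rng.normal(center, 1.0, size=(n_per_class, dim)))
        y.append(np.full(n_per_class, c))
    return np.vstack(X), np.concatenate(y)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

class MLP:
    """One-hidden-layer network: tanh hidden units, softmax output."""

    def __init__(self, dim, hidden=16, n_classes=3, lr=0.1):
        self.W1 = rng.normal(0, 0.1, (dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0, 0.1, (hidden, n_classes))
        self.b2 = np.zeros(n_classes)
        self.lr = lr

    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)
        return softmax(self.h @ self.W2 + self.b2)

    def train_step(self, X, y):
        # Backpropagation of the softmax cross-entropy gradient.
        p = self.forward(X)
        onehot = np.eye(p.shape[1])[y]
        d2 = (p - onehot) / len(X)            # output-layer error
        gW2, gb2 = self.h.T @ d2, d2.sum(0)
        d1 = (d2 @ self.W2.T) * (1 - self.h ** 2)  # tanh derivative
        gW1, gb1 = X.T @ d1, d1.sum(0)
        for param, grad in ((self.W1, gW1), (self.b1, gb1),
                            (self.W2, gW2), (self.b2, gb2)):
            param -= self.lr * grad           # plain gradient descent

    def predict(self, X):
        return self.forward(X).argmax(axis=1)

X, y = make_data()
net = MLP(dim=X.shape[1])
for _ in range(300):
    net.train_step(X, y)
acc = (net.predict(X) == y).mean()
```

On such cleanly separated synthetic clusters the network reaches high training accuracy quickly; the paper's reported advantage over k-NN and MSE classifiers refers to its real audio features and data, which this sketch does not reproduce.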