Two-layer automatic sound classification system for conversation enhancement in hearing aids

  • Authors:
  • Enrique Alexandre; Lucas Cuadra; Lorena Álvarez; Manuel Rosa-Zurera; Francisco López-Ferreras

  • Affiliations:
  • (Correspondence: Tel. +34 91 885 6727; Fax +34 91 885 6699; E-mail: enrique.alexandre@uah.es) Department of Signal Theory and Communications, University of Alcalá, 28805 Alcalá de Henares, Madrid, Spain (all authors)

  • Venue:
  • Integrated Computer-Aided Engineering
  • Year:
  • 2008


Abstract

This paper focuses on the development of an automatic sound classifier for digital hearing aids that aims to enhance listening comprehension when the user moves from one sound environment to another. The approach consists in dividing the classification algorithm into two layers built from two-class algorithms, which work more efficiently: the input signal, discriminated by the first layer into either speech or non-speech, is subsequently classified more specifically depending on whether the user is in a conversation (either in quiet or in the presence of background noise) or in a noisy environment in the absence of speech. The system thus distinguishes four classes, labeled speech in quiet, speech in noise, stationary noisy environments (for instance, an aircraft cabin), and non-stationary noisy environments. The combination of classifiers found to be most successful in terms of probability of correct classification uses Multilayer Perceptrons for the classification tasks in which speech is involved, and a Fisher Linear Discriminant for distinguishing stationary noisy environments from non-stationary ones. The system's performance has been found to be higher than that of other, more classical approaches, and even superior to that of our preliminary work.
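The two-layer cascade described in the abstract can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the classifier functions are hypothetical stand-ins passed in as callables, whereas in the paper the speech-related decisions are made by Multilayer Perceptrons and the stationary/non-stationary decision by a Fisher Linear Discriminant trained on extracted audio features.

```python
# Sketch of a two-layer sound classifier cascade (assumed structure, not the
# paper's code). Layer 1 splits speech from non-speech; layer 2 refines each
# branch into the paper's four final classes.

from typing import Callable, Sequence

Features = Sequence[float]

def classify_frame(
    features: Features,
    is_speech: Callable[[Features], bool],            # layer 1 (MLP in the paper)
    speech_in_quiet: Callable[[Features], bool],      # layer 2, speech branch (MLP)
    noise_is_stationary: Callable[[Features], bool],  # layer 2, noise branch (Fisher LD)
) -> str:
    """Return one of the four classes used in the paper."""
    if is_speech(features):
        # Speech branch: decide whether background noise is present.
        return "speech in quiet" if speech_in_quiet(features) else "speech in noise"
    # Non-speech branch: decide whether the noise is stationary.
    return ("stationary noise" if noise_is_stationary(features)
            else "non-stationary noise")

# Toy usage with simple threshold stand-ins on a single scalar feature:
label = classify_frame(
    [0.9],
    is_speech=lambda f: f[0] > 0.5,
    speech_in_quiet=lambda f: f[0] > 0.8,
    noise_is_stationary=lambda f: f[0] > 0.2,
)
print(label)  # -> speech in quiet
```

Structuring the problem as a cascade of binary decisions keeps each classifier small, which matters under the tight computational constraints of a hearing-aid DSP.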