Foreground auditory scene analysis for hearing aids

  • Authors:
  • Marie A. Roch, Richard R. Hurtig, Tong Huang, Jing Liu, Sonia M. Arteaga

  • Affiliations:
  • Department of Computer Science, San Diego State University, 5500 Campanile Drive, San Diego, CA 92182-7720, United States (Roch, Huang, Liu, Arteaga); Department of Speech Pathology and Audiology, The University of Iowa, 119 SHC, Iowa City, IA 52242, United States (Hurtig)

  • Venue:
  • Pattern Recognition Letters
  • Year:
  • 2007

Abstract

Although a wide variety of signal enhancement algorithms are available for hearing aids, selecting and parameterizing the best algorithm at any given time depends heavily on the hearing aid user's environment. Several groups have proposed using auditory scene analysis to categorize the background noise. In this work, an algorithm is proposed that instead categorizes the foreground speaker, as opposed to the background noise, and uses that categorization to parameterize a frequency-based compression algorithm previously shown to improve speech understanding for some individuals with severe sensorineural hearing loss in the 2-3 kHz range.
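The frequency-based compression the abstract refers to remaps high-frequency speech energy into a lower band that the listener can still resolve. The paper does not specify the mapping, so the sketch below is a hypothetical, minimal illustration of the general idea: spectral bins below a cutoff pass through unchanged, while bins above it are linearly compressed toward the cutoff by a ratio (the `cutoff_bin` and `ratio` parameters are assumptions, not the authors' values).

```python
def compress_spectrum(mags, cutoff_bin, ratio):
    """Remap spectral magnitudes above cutoff_bin into a narrower band.

    Bins below cutoff_bin pass through unchanged; the magnitude in
    bin k >= cutoff_bin is accumulated into bin
    cutoff_bin + int(ratio * (k - cutoff_bin)).

    Hypothetical illustration of frequency compression in general,
    not the algorithm proposed in the paper.
    """
    out = [0.0] * len(mags)
    for k, m in enumerate(mags):
        if k < cutoff_bin:
            out[k] += m                                   # pass-through band
        else:
            out[cutoff_bin + int(ratio * (k - cutoff_bin))] += m  # compressed band
    return out


# Example: eight uniform bins, compress everything above bin 4 by a 2:1 ratio.
spectrum = compress_spectrum([1.0] * 8, cutoff_bin=4, ratio=0.5)
```

Total spectral energy is preserved; only its distribution across bins changes, which is what makes a scene-dependent choice of `ratio` meaningful.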