A computational auditory scene analysis-enhanced beamforming approach for sound source separation

  • Authors:
  • L. A. Drake
  • J. C. Rutledge
  • J. Zhang
  • A. Katsaggelos

  • Affiliations:
  • JunTech Inc., Shorewood, WI
  • Computer Science and Electrical Engineering Department, University of Maryland, Baltimore County, Baltimore, MD
  • Electrical Engineering and Computer Science Department, University of Wisconsin-Milwaukee, Milwaukee, WI
  • Department of Electrical Engineering and Computer Science, Northwestern University, Evanston, IL

  • Venue:
  • EURASIP Journal on Advances in Signal Processing - Special issue on digital signal processing for hearing instruments
  • Year:
  • 2009


Abstract

Hearing aid users have difficulty hearing target signals, such as speech, in the presence of competing signals or noise. Most solutions proposed to date enhance or extract target signals from background noise and interference based on either location attributes or source attributes. Location attributes typically involve arrival angles at a microphone array. Source attributes include characteristics specific to a signal, such as fundamental frequency, or statistical properties that differentiate signals. This paper describes a novel approach to sound source separation, called computational auditory scene analysis-enhanced beamforming (CASA-EB), that improves separation performance by combining two complementary techniques: CASA (a source-attribute technique) and beamforming (a location-attribute technique). The techniques are complementary in that they exploit independent attributes for signal separation. CASA-EB performs sound source separation by temporally and spatially filtering a multichannel input signal, and then grouping the resulting signal components into separated signals based on source and location attributes. Experimental results show an increased signal-to-interference ratio for CASA-EB over beamforming or CASA alone.
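To make the spatial-filtering (location-attribute) stage concrete, the sketch below implements a generic far-field delay-and-sum beamformer for a linear microphone array. This is a minimal illustration of the beamforming component only, not the paper's CASA-EB algorithm: the function name, array geometry, and parameters are assumptions for the example, and the CASA grouping stage (e.g., by fundamental frequency) is not shown.

```python
import numpy as np

def delay_and_sum(signals, mic_positions, angle, fs, c=343.0):
    """Steer a linear array toward `angle` (radians from broadside)
    and average the time-aligned channels.

    signals: (n_mics, n_samples) array of microphone recordings
    mic_positions: 1-D mic positions in meters along the array axis
    Fractional delays are applied as phase shifts in the frequency domain.
    """
    n_mics, n = signals.shape
    # Far-field plane-wave delay of each mic relative to the origin.
    delays = mic_positions * np.sin(angle) / c
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spec = np.fft.rfft(signals, axis=1)
    # Advancing channel m by delays[m] time-aligns it for the steered direction.
    shifts = np.exp(2j * np.pi * freqs[None, :] * delays[:, None])
    aligned = np.fft.irfft(spec * shifts, n=n, axis=1)
    return aligned.mean(axis=0)
```

When the steering angle matches the source's arrival angle, the channels add coherently; for other angles they partially cancel, attenuating interferers arriving from those directions. CASA-EB's grouping stage would then assign the resulting time-frequency-angle components to separated sources.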