A computational auditory scene analysis system for speech segregation and robust speech recognition

  • Authors:
  • Yang Shao, Soundararajan Srinivasan, Zhaozhang Jin, DeLiang Wang

  • Affiliations:
  • Yang Shao: Department of Computer Science and Engineering, The Ohio State University, Columbus, OH 43210, USA
  • Soundararajan Srinivasan: Biomedical Engineering Department, The Ohio State University, Columbus, OH 43210, USA
  • Zhaozhang Jin: Department of Computer Science and Engineering, The Ohio State University, Columbus, OH 43210, USA
  • DeLiang Wang: Department of Computer Science and Engineering and Center for Cognitive Science, The Ohio State University, Columbus, OH 43210, USA

  • Venue:
  • Computer Speech and Language
  • Year:
  • 2010


Abstract

A conventional automatic speech recognizer does not perform well in the presence of multiple sound sources, while human listeners are able to segregate and recognize a signal of interest through auditory scene analysis. We present a computational auditory scene analysis system for separating and recognizing target speech in the presence of competing speech or noise. We estimate, in two stages, the ideal binary time-frequency (T-F) mask which retains the mixture in a local T-F unit if and only if the target is stronger than the interference within the unit. In the first stage, we use harmonicity to segregate the voiced portions of individual sources in each time frame based on multipitch tracking. Additionally, unvoiced portions are segmented based on an onset/offset analysis. In the second stage, speaker characteristics are used to group the T-F units across time frames. The resulting masks are used in an uncertainty decoding framework for automatic speech recognition. We evaluate our system on a speech separation challenge and show that our system yields substantial improvement over the baseline performance.
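The ideal binary mask criterion stated above (retain a time-frequency unit if and only if the target is stronger than the interference, i.e., local SNR above 0 dB) can be made concrete with a short sketch. The code below is illustrative only: it assumes oracle access to the premixed target and interference signals, which is what makes the mask "ideal", and it uses an STFT as a stand-in T-F decomposition; the abstract does not fix the representation, and CASA systems such as this one typically operate on an auditory filterbank instead. All function names here are hypothetical.

```python
# Minimal sketch of the ideal binary mask (IBM), assuming the target and
# interference are available separately before mixing. Illustrative only;
# not the paper's implementation.
import numpy as np
from scipy.signal import stft, istft

def ideal_binary_mask(target, interference, fs, nperseg=512):
    """Return 1 in each T-F unit where target energy exceeds interference
    energy (local SNR > 0 dB), and 0 elsewhere."""
    _, _, T = stft(target, fs, nperseg=nperseg)
    _, _, N = stft(interference, fs, nperseg=nperseg)
    return (np.abs(T) ** 2 > np.abs(N) ** 2).astype(float)

def apply_mask(mixture, mask, fs, nperseg=512):
    """Retain the mixture only in the T-F units the mask keeps,
    then resynthesize a time-domain signal."""
    _, _, M = stft(mixture, fs, nperseg=nperseg)
    _, x = istft(M * mask, fs, nperseg=nperseg)
    return x

if __name__ == "__main__":
    fs = 16000
    t = np.arange(fs) / fs
    target = np.sin(2 * np.pi * 220 * t)   # toy periodic "target"
    noise = 0.5 * np.random.randn(fs)      # toy interference
    mixture = target + noise
    mask = ideal_binary_mask(target, noise, fs)
    segregated = apply_mask(mixture, mask, fs)
```

Note that the resynthesis step above is only one common way to use such a mask; in the system described here, the estimated (non-ideal) mask instead feeds an uncertainty decoding framework in the recognizer.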