A GMM sound source model for blind speech separation in under-determined conditions

  • Authors:
  • Yasuharu Hirasawa; Naoki Yasuraoka; Toru Takahashi; Tetsuya Ogata; Hiroshi G. Okuno

  • Affiliations:
  • Graduate School of Informatics, Kyoto University, Kyoto, Japan (all authors)

  • Venue:
  • LVA/ICA'12: Proceedings of the 10th International Conference on Latent Variable Analysis and Signal Separation
  • Year:
  • 2012


Abstract

This paper focuses on blind speech separation in under-determined conditions, that is, when there are more sound sources than microphones. We introduce a sound source model based on the Gaussian mixture model (GMM) to represent a speech signal in the time-frequency domain, and derive update rules for the model parameters using the auxiliary function method. Our GMM sound source model consists of two kinds of Gaussians: sharp ones representing harmonic parts of the spectrum and smooth ones representing nonharmonic parts. Experimental results show that our method outperforms a method based on non-negative matrix factorization (NMF) by 0.7 dB in signal-to-distortion ratio (SDR) and by 1.7 dB in signal-to-interference ratio (SIR), indicating that it effectively suppresses interference from other talkers.
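
To make the model structure concrete, below is a minimal sketch of how a frame's power spectrum could be represented by such a mixture: narrow (sharp) Gaussians centred on integer multiples of a fundamental frequency F0 capture the harmonic part, while a few wide (smooth) Gaussians capture the nonharmonic part. This is not the authors' implementation; the function name, parameter names, and widths (`sharp_sigma`, `smooth_sigma`) are illustrative assumptions, and in the actual method such parameters would be estimated jointly with the separation via the auxiliary-function updates described in the paper.

```python
# Illustrative sketch (not the paper's code) of a GMM-style spectral model:
# sharp Gaussians at harmonics of F0 plus smooth Gaussians for nonharmonic energy.
import numpy as np

def gmm_spectral_model(freqs, f0, harmonic_weights, smooth_means,
                       smooth_weights, sharp_sigma=20.0, smooth_sigma=500.0):
    """Return a model power spectrum evaluated at frequencies `freqs` (Hz).

    freqs            : array of frequency-bin centres in Hz
    f0               : assumed fundamental frequency of the talker (Hz)
    harmonic_weights : weight of each sharp Gaussian (one per harmonic)
    smooth_means     : centre frequencies of the smooth Gaussians (Hz)
    smooth_weights   : weight of each smooth Gaussian
    """
    spectrum = np.zeros_like(freqs, dtype=float)
    # Sharp Gaussians: narrow peaks at integer multiples of F0 (harmonic part).
    for k, w in enumerate(harmonic_weights, start=1):
        spectrum += w * np.exp(-0.5 * ((freqs - k * f0) / sharp_sigma) ** 2)
    # Smooth Gaussians: broad components for the nonharmonic, noise-like part.
    for mu, w in zip(smooth_means, smooth_weights):
        spectrum += w * np.exp(-0.5 * ((freqs - mu) / smooth_sigma) ** 2)
    return spectrum

if __name__ == "__main__":
    freqs = np.linspace(0, 4000, 512)           # frequency axis (Hz)
    model = gmm_spectral_model(
        freqs,
        f0=200.0,                                # assumed pitch
        harmonic_weights=[1.0, 0.8, 0.6, 0.4],   # decaying harmonic energy
        smooth_means=[500.0, 2000.0, 3500.0],    # broad spectral tilt
        smooth_weights=[0.3, 0.2, 0.1],
    )
    print(model.shape, round(float(model.max()), 3))
```

In this reading, the sharp components give the model enough frequency resolution to lock onto a talker's harmonics, while the smooth components absorb the remaining spectral envelope, which is the division of labour the abstract attributes to the two kinds of Gaussians.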