A null space method for over-complete blind source separation

  • Authors:
  • Ray-Bing Chen; Ying Nian Wu

  • Affiliations:
  • Institute of Statistics, National University of Kaohsiung, Kaohsiung 811, Taiwan, ROC; Department of Statistics, University of California, Los Angeles, CA 90095, USA

  • Venue:
  • Computational Statistics & Data Analysis
  • Year:
  • 2007

Abstract

In blind source separation, there are M sources that produce sounds independently and continuously over time. These sounds are recorded by m receivers. The sound recorded by each receiver at each time point is a linear superposition of the sounds produced by the M sources at the same time point. The problem of blind source separation is to recover the sounds of the sources from the sounds recorded by the receivers, without knowledge of the m × M mixing matrix that transforms the sounds of the sources into the sounds of the receivers at each time point. Over-complete separation refers to the situation where the number of sources M is greater than the number of receivers m, so that the source sounds cannot be uniquely solved from the receiver sounds even if the mixing matrix is known. In this paper, we propose a null space representation for the over-complete blind source separation problem. This representation explicitly identifies the solution space of the source sounds in terms of the null space of the mixing matrix, obtained via singular value decomposition. Under this representation, the problem can be posed in the framework of a Bayesian latent variable model, where the mixing matrix and the source sounds are inferred from their posterior distributions. We then propose a null space algorithm for Markov chain Monte Carlo posterior sampling. We illustrate the algorithm with several examples under two different statistical assumptions about the independent source sounds. The blind source separation problem is mathematically equivalent to the independent component analysis problem, so our method applies equally to over-complete independent component analysis for unsupervised learning of high-dimensional data.
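As a minimal illustration of the null space representation described in the abstract (not the paper's implementation), the sketch below uses numpy to show that, for an under-determined mixing matrix A with m < M, every candidate source matrix can be written as a particular solution plus an arbitrary combination of null space directions of A obtained from the singular value decomposition. All names, sizes, and the random toy data are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

m, M, T = 2, 3, 5            # receivers, sources, time points (toy sizes)
A = rng.normal(size=(m, M))  # hypothetical mixing matrix
S = rng.normal(size=(M, T))  # hypothetical source sounds
X = A @ S                    # receiver sounds (linear superposition at each time point)

# SVD of the mixing matrix: A = U diag(d) Vt, with Vt of shape (M, M).
U, d, Vt = np.linalg.svd(A)

# Assuming A has full row rank m, the last M - m rows of Vt span the null space of A.
N = Vt[m:].T                 # M x (M - m), orthonormal columns with A @ N ~ 0

# Minimum-norm particular solution via the pseudo-inverse.
S_particular = np.linalg.pinv(A) @ X

# Any solution of A S = X has the form S_particular + N @ Z for some
# (M - m) x T coefficient matrix Z; here Z is a random draw for illustration.
Z = rng.normal(size=(M - m, T))
S_candidate = S_particular + N @ Z

print(np.allclose(A @ S_candidate, X))   # True: the candidate still reproduces the recordings
```

In this parameterization the free coefficients Z index the solution space of the source sounds, which is what the paper's Bayesian latent variable formulation and MCMC sampler explore under the chosen source priors.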