A constrained EM algorithm for univariate normal mixtures. Journal of Statistical Computation and Simulation.
Blind separation of sources, Part II: problems statement. Signal Processing.
Factorial Hidden Markov Models. Machine Learning, special issue on learning with probabilistic representations.
Neural Computation.
Unsupervised classification with non-Gaussian mixture models using ICA. Advances in Neural Information Processing Systems 11 (NIPS 1998).
Bayesian Image Restoration: An Application to Edge-Preserving Surface Recovery. IEEE Transactions on Pattern Analysis and Machine Intelligence.
A blind source separation technique using second-order statistics. IEEE Transactions on Signal Processing.
Equivariant adaptive source separation. IEEE Transactions on Signal Processing.
Blind source separation-semiparametric statistical approach. IEEE Transactions on Signal Processing.
Blind separation of non-stationary sources using continuous density hidden Markov models. Digital Signal Processing.
This paper considers the problem of source separation in the case of noisy instantaneous mixtures. In previous work [1], the sources were modeled by a mixture of Gaussians, leading to a hierarchical Bayesian model in which the labels of the mixture are treated as i.i.d. hidden variables. We extend this model by giving the labels a Markovian structure. This extension is important for practical applications, which are abundant: unsupervised classification and segmentation, pattern recognition, and speech signal processing. To estimate the mixing matrix and the prior model parameters, we treat the observations as incomplete data. The missing data are the sources and the labels: the sources are missing data for the observations, and the labels are in turn missing data for the sources. This hierarchical model leads to specific restoration-maximization algorithms, whose restoration step can be carried out in three different ways: (i) the complete-data likelihood is replaced by its conditional expectation, which gives the EM (expectation-maximization) algorithm [2]; (ii) the missing data are estimated by their maximum a posteriori, which gives the JMAP (joint maximum a posteriori) algorithm [3]; (iii) the missing data are sampled from their posterior distributions, which gives the SEM (stochastic EM) algorithm [4]. A Gibbs sampling scheme is implemented to generate the missing data. We have also introduced a relaxation strategy into these algorithms to reduce the computational cost, which grows exponentially with the number of source components and the number of Gaussian components in the mixture.
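For concreteness, the hierarchical model sketched in the abstract can be written as follows; the notation (A, s_t, z_{j,t}, m_{j,k}, sigma^2_{j,k}, P_j) is ours, not necessarily the paper's:

```latex
x_t = A s_t + n_t, \qquad
p(s_{j,t} \mid z_{j,t} = k) = \mathcal{N}(m_{j,k}, \sigma^2_{j,k}), \qquad
p(z_{j,t} = k \mid z_{j,t-1} = l) = P_j(l, k),
```

where x_t is the observation vector, A the mixing matrix, n_t the noise, s_{j,t} the j-th source, and z_{j,t} its hidden Markov label. The toy sketch below illustrates one restoration-maximization iteration in the SEM flavor under these assumptions: a Gibbs scheme samples the missing data (labels, then sources), followed by closed-form parameter updates. It is a minimal illustration, not the authors' implementation; in particular, a single-site Gibbs sweep over each label chain is used for brevity where a forward-filtering backward-sampling pass would also be natural.

```python
# Minimal SEM-style sketch (assumed toy implementation): noisy instantaneous
# mixture x_t = A s_t + n_t, each source a Gaussian mixture whose labels
# follow a first-order Markov chain.
import numpy as np

rng = np.random.default_rng(0)

def gauss_logpdf(x, m, v):
    return -0.5 * (np.log(2 * np.pi * v) + (x - m) ** 2 / v)

def sample_labels(S, Z, means, varis, trans):
    """Single-site Gibbs sweep over the Markov label chains (one per source)."""
    n, T = S.shape
    K = means.shape[1]
    for j in range(n):
        for t in range(T):
            logp = gauss_logpdf(S[j, t], means[j], varis[j])   # emission, all K states
            if t > 0:
                logp += np.log(trans[j][Z[j, t - 1]])          # from previous label
            if t < T - 1:
                logp += np.log(trans[j][:, Z[j, t + 1]])       # to next label
            p = np.exp(logp - logp.max())
            Z[j, t] = rng.choice(K, p=p / p.sum())
    return Z

def sample_sources(X, Z, A, noise_var, means, varis):
    """Draw the sources from their Gaussian posterior given labels and data."""
    n, T = Z.shape
    S = np.empty((n, T))
    for t in range(T):
        mu0 = means[np.arange(n), Z[:, t]]                     # prior mean from labels
        d0 = varis[np.arange(n), Z[:, t]]                      # prior variances
        C = np.linalg.inv(A.T @ A / noise_var + np.diag(1.0 / d0))
        m = C @ (A.T @ X[:, t] / noise_var + mu0 / d0)
        S[:, t] = rng.multivariate_normal(m, C)
    return S

def m_step(X, S, Z, K):
    """Closed-form updates given the restored (sampled) missing data."""
    A = X @ S.T @ np.linalg.inv(S @ S.T)                       # mixing matrix, least squares
    noise_var = np.mean((X - A @ S) ** 2)
    n, T = S.shape
    means = np.zeros((n, K)); varis = np.ones((n, K))
    trans = np.full((n, K, K), 1.0 / K)
    for j in range(n):
        for k in range(K):
            idx = Z[j] == k
            if idx.sum() > 1:
                means[j, k] = S[j, idx].mean()
                varis[j, k] = S[j, idx].var() + 1e-6
        for k in range(K):                                     # smoothed transition counts
            prev = Z[j, :-1] == k
            if prev.sum() > 0:
                counts = np.bincount(Z[j, 1:][prev], minlength=K) + 1.0
                trans[j, k] = counts / counts.sum()
    return A, noise_var, means, varis, trans

# Hypothetical usage on stand-in data (all sizes and inits are illustrative).
m_obs, n_src, K, T = 3, 2, 2, 500
X = rng.standard_normal((m_obs, T))
A = rng.standard_normal((m_obs, n_src)); noise_var = 0.1
means = rng.standard_normal((n_src, K)); varis = np.ones((n_src, K))
trans = np.full((n_src, K, K), 1.0 / K)
Z = rng.integers(0, K, size=(n_src, T))
for _ in range(20):                                            # SEM iterations
    S = sample_sources(X, Z, A, noise_var, means, varis)
    Z = sample_labels(S, Z, means, varis, trans)
    A, noise_var, means, varis, trans = m_step(X, S, Z, K)
```

Replacing the two sampling steps by conditional expectations would give the EM variant, and replacing them by posterior maximization would give the JMAP variant described in the abstract.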