On approximate maximum-likelihood methods for blind identification: how to cope with the curse of dimensionality

  • Authors:
  • Steffen Barembruch; Aurélien Garivier; Eric Moulines

  • Affiliations:
  • Institut des Télécommunications / TELECOM Paris-Tech, Paris, France (all authors)

  • Venue:
  • IEEE Transactions on Signal Processing
  • Year:
  • 2009


Abstract

We discuss approximate maximum-likelihood methods for blind identification and deconvolution. These algorithms are based on particle approximation versions of the expectation-maximization (EM) algorithm. We consider three methods that differ in the way the posterior distribution of the symbols is computed. The first is a particle approximation of fixed-interval smoothing; the other two, the two-filter smoother and a novel joined two-filter smoother, involve an additional backward information filter. Because the state space is finite, it is moreover possible at each step to consider all the offspring of any given particle. A new particle swarm must then be constructed by selecting particle positions among all these offspring and computing appropriate weights. We propose a novel unbiased selection scheme that minimizes the expected loss with respect to general distance functions. We compare these smoothing algorithms and selection schemes in a Monte Carlo experiment and show a significant performance increase over the expectation-maximization Viterbi algorithm (EMVA), a fixed-lag smoothing algorithm, and the block constant modulus algorithm (CMA).
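To illustrate the enumerate-all-offspring step described in the abstract, the following is a minimal sketch, not the authors' implementation: one forward step of a particle filter over a finite symbol alphabet, in which every particle history is extended by every symbol before the swarm is reduced back to a fixed size. The callable `loglik` is a hypothetical observation log-likelihood, and systematic resampling is used only as a generic unbiased stand-in for the paper's loss-minimizing selection scheme.

```python
import numpy as np

def forward_step(particles, weights, alphabet, loglik, n_particles, rng):
    """One propagate-and-select step over a finite symbol alphabet.

    particles   : (N, t) array of symbol histories
    weights     : (N,) normalized particle weights
    alphabet    : finite symbol alphabet, e.g. [-1.0, +1.0] for BPSK
    loglik      : callable returning log p(y_t | extended history) -- assumed
    n_particles : swarm size after selection
    rng         : numpy Generator
    """
    # Enumerate every offspring: each particle history extended by each symbol.
    offspring, log_w = [], []
    for x, w in zip(particles, weights):
        for s in alphabet:
            x_new = np.append(x, s)
            offspring.append(x_new)
            log_w.append(np.log(w) + loglik(x_new))

    # Normalize offspring weights in a numerically stable way.
    log_w = np.asarray(log_w)
    off_w = np.exp(log_w - log_w.max())
    off_w /= off_w.sum()

    # Unbiased selection back to n_particles positions. Systematic resampling
    # is shown here; the paper's scheme instead minimizes an expected loss
    # with respect to a chosen distance function.
    u = (rng.random() + np.arange(n_particles)) / n_particles
    idx = np.minimum(np.searchsorted(np.cumsum(off_w), u), len(offspring) - 1)
    new_particles = np.stack([offspring[i] for i in idx])
    new_weights = np.full(n_particles, 1.0 / n_particles)
    return new_particles, new_weights
```

Because the alphabet is finite, the number of offspring per step is only N times the alphabet size, so exhaustive enumeration before selection remains tractable; the choice of selection scheme then governs how much of the posterior mass the reduced swarm retains.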