Approximate posterior distributions for convolutional two-level hidden Markov models
Computational Statistics & Data Analysis
We discuss approximate maximum-likelihood methods for blind identification and deconvolution. These algorithms are based on particle approximations of the expectation-maximization (EM) algorithm. We consider three methods that differ in how the posterior distribution of the symbols is computed. The first is a particle approximation of fixed-interval smoothing; the two-filter smoothing and the novel joined-two-filter smoothing involve an additional backward information filter. Because the state space is finite, it is furthermore possible at each step to consider all the offspring of any given particle. A new particle swarm must then be constructed by selecting particle positions among all these offspring and computing appropriate weights. We propose a novel unbiased selection scheme that minimizes the expected loss with respect to general distance functions. We compare these smoothing algorithms and selection schemes in a Monte Carlo experiment and show a significant performance increase over the expectation-maximization Viterbi algorithm (EMVA), a fixed-lag smoothing algorithm, and the block constant modulus algorithm (CMA).
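The offspring-enumeration and selection step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's method: it assumes a BPSK symbol alphabet and a Gaussian observation model, and it substitutes standard systematic resampling (which is also unbiased in the sense that the expected number of copies of offspring i equals n times its weight) for the paper's loss-minimizing selection scheme. All function and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def expand_offspring(particles, weights, symbols, lik):
    """Enumerate every offspring of every particle (finite state space):
    append each candidate symbol and reweight by its likelihood."""
    n = particles.shape[0]
    k = len(symbols)
    expanded = np.hstack([np.repeat(particles, k, axis=0),
                          np.tile(symbols, n).reshape(-1, 1)])
    w = np.repeat(weights, k) * lik(expanded[:, -1])
    return expanded, w / w.sum()

def systematic_resample(weights, n):
    """Unbiased selection stand-in: expected copies of offspring i
    equal n * weights[i]."""
    u = (rng.random() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(weights), u)

# One toy step: BPSK alphabet, scalar observation y = s + Gaussian noise.
symbols = np.array([-1.0, 1.0])
y = 0.7
lik = lambda s: np.exp(-0.5 * (y - s) ** 2)

particles = rng.choice(symbols, size=(8, 3))  # 8 particles, 3 past symbols
weights = np.full(8, 1.0 / 8)

offspring, w = expand_offspring(particles, weights, symbols, lik)
idx = systematic_resample(w, 8)               # select 8 of the 16 offspring
particles, weights = offspring[idx], np.full(8, 1.0 / 8)
```

In a full smoother this branch-and-select step would be iterated over the data block in a forward pass, with the backward information filter supplying the second factor of the two-filter weights.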