Denoising and recognition using hidden Markov models with observation distributions modeled by hidden Markov trees

  • Authors:
  • Diego H. Milone;Leandro E. Di Persia;María E. Torres

  • Affiliations:
  • Laboratory for Signals and Computational Intelligence, Department of Informatics, National University of Litoral, Campus Santa Fe (3000), Argentina (all authors); María E. Torres also at the Laboratory for Signals and Non-linear Dynami ...

  • Venue:
  • Pattern Recognition
  • Year:
  • 2010


Abstract

Hidden Markov models have proven very useful for a wide range of applications in machine learning and pattern recognition. The wavelet transform has emerged as a new tool for signal and image analysis. Learning models for wavelet coefficients have mainly been based on fixed-length sequences, but real applications often require modeling variable-length, very long, or real-time sequences. In this paper, we propose a new learning architecture for sequences analyzed on a short-term basis, without assuming stationarity within each frame. Long-term dependencies are modeled with a hidden Markov model whose internal states each handle the local dynamics in the wavelet domain using a hidden Markov tree. The training algorithms for all the parameters in the composite model are developed within the expectation-maximization framework. This novel learning architecture could be useful for a wide range of applications. We detail two experiments with artificial and real data: model-based denoising and speech recognition. Denoising results indicate that the proposed model and learning algorithm are more effective than previous approaches based on isolated hidden Markov trees. On the 'Doppler' benchmark sequence, with 1024 samples and additive white noise, the new method reduced the mean squared error from 1.0 to 0.0842. The proposed methods for feature extraction, modeling, and learning increased phoneme recognition rates by 28.13%, with better convergence than models based on Gaussian mixtures.
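The denoising experiment the abstract refers to replaces classical wavelet shrinkage with hidden Markov tree priors learned inside the HMM states. As a self-contained point of comparison only (this is not the authors' HMM-HMT method), the sketch below runs a standard baseline: soft-threshold (VisuShrink-style) denoising of the 'Doppler' benchmark at the same length, 1024 samples with unit-variance additive white noise, using a hand-rolled orthonormal Haar transform. All function names and the choice of Haar wavelet are illustrative assumptions.

```python
import numpy as np

def haar_dwt(x, levels):
    """Multi-level orthonormal Haar DWT: returns (approximation, [detail bands, finest first])."""
    details = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
        details.append(d)
    return a, details

def haar_idwt(a, details):
    """Inverse of haar_dwt: reassemble the signal from coarse to fine."""
    for d in reversed(details):
        out = np.empty(2 * a.size)
        out[0::2] = (a + d) / np.sqrt(2)
        out[1::2] = (a - d) / np.sqrt(2)
        a = out
    return a

def doppler(n):
    """The standard 'Doppler' test signal of Donoho and Johnstone."""
    t = np.arange(1, n + 1) / n
    return np.sqrt(t * (1 - t)) * np.sin(2.1 * np.pi / (t + 0.05))

rng = np.random.default_rng(0)
n, levels = 1024, 6
clean = doppler(n)
clean = clean / clean.std()                  # unit-variance signal
noisy = clean + rng.standard_normal(n)       # additive white noise, sigma = 1, so MSE ~ 1.0

a, details = haar_dwt(noisy, levels)
sigma = np.median(np.abs(details[0])) / 0.6745   # noise estimate from the finest band (MAD)
thr = sigma * np.sqrt(2 * np.log(n))             # universal threshold
details = [np.sign(d) * np.maximum(np.abs(d) - thr, 0.0) for d in details]
denoised = haar_idwt(a, details)

mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
print(f"MSE noisy: {mse_noisy:.3f}, MSE denoised: {mse_denoised:.3f}")
```

Coefficient-independent thresholding of this kind ignores the persistence of wavelet coefficients across scales, which is exactly the structure the paper's hidden Markov trees capture; the HMM layer then lets that structure vary over long, non-stationary sequences.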