Smooth on-line learning algorithms for hidden Markov models

  • Authors:
  • Pierre Baldi; Yves Chauvin

  • Venue:
  • Neural Computation
  • Year:
  • 1994

Abstract

A simple learning algorithm for Hidden Markov Models (HMMs) is presented, together with a number of variations. Unlike other classical algorithms such as the Baum-Welch algorithm, the algorithms described are smooth and can be used on-line (after each example presentation) or in batch mode, with or without the usual Viterbi most-likely-path approximation. The algorithms have simple expressions that result from using a normalized-exponential representation for the HMM parameters. All the algorithms presented are proved to be exact or approximate gradient optimization algorithms with respect to likelihood, log-likelihood, or cross-entropy functions, and as such are usually convergent. These algorithms can also be cast in the more general EM (Expectation-Maximization) framework, where they can be viewed as exact or approximate GEM (Generalized Expectation-Maximization) algorithms. The mathematical properties of the algorithms are derived in the appendix.
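To make the normalized-exponential idea concrete, below is a minimal sketch in Python/NumPy of one smooth on-line update of the kind the abstract describes: the transition and emission matrices are softmax (normalized-exponential) functions of unconstrained parameters, and each observation sequence contributes one gradient step on the log-likelihood, so the updated matrices stay stochastic automatically. The function names, learning rate, and scaled forward-backward implementation are illustrative assumptions, not taken from the paper.

    import numpy as np

    def forward_backward(A, B, pi, obs):
        """Scaled forward-backward pass; returns the log-likelihood and
        expected transition/emission counts for one observation sequence."""
        T, N = len(obs), len(pi)
        alpha = np.zeros((T, N)); scale = np.zeros(T)
        alpha[0] = pi * B[:, obs[0]]
        scale[0] = alpha[0].sum(); alpha[0] /= scale[0]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
            scale[t] = alpha[t].sum(); alpha[t] /= scale[t]
        beta = np.zeros((T, N)); beta[-1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / scale[t + 1]
        gamma = alpha * beta                  # P(state at time t | obs)
        xi = np.zeros((N, N))                 # expected transition counts
        for t in range(T - 1):
            xi += np.outer(alpha[t], B[:, obs[t + 1]] * beta[t + 1]) * A / scale[t + 1]
        em = np.zeros_like(B)                 # expected emission counts
        for t in range(T):
            em[:, obs[t]] += gamma[t]
        return np.log(scale).sum(), xi, em

    def online_step(WA, WB, pi, obs, lr=0.1):
        """One smooth on-line update on a single sequence. A and B are
        row-wise softmax functions of the unconstrained parameters WA, WB."""
        A = np.exp(WA); A /= A.sum(axis=1, keepdims=True)
        B = np.exp(WB); B /= B.sum(axis=1, keepdims=True)
        ll, xi, em = forward_backward(A, B, pi, obs)
        # Gradient of the log-likelihood w.r.t. the softmax parameters:
        # dL/dw_ij = n_ij - (sum_k n_ik) * a_ij, with n the expected counts.
        WA += lr * (xi - xi.sum(axis=1, keepdims=True) * A)
        WB += lr * (em - em.sum(axis=1, keepdims=True) * B)
        return ll

    # Example (hypothetical data): two hidden states, three symbols.
    rng = np.random.default_rng(0)
    WA = rng.normal(size=(2, 2)); WB = rng.normal(size=(2, 3))
    pi = np.array([0.5, 0.5])
    for _ in range(100):
        ll = online_step(WA, WB, pi, obs=[0, 1, 2, 1, 0])
    print(ll)  # log-likelihood increases as the parameters adapt

Because the parameters w_ij are unconstrained, the gradient step needs no projection or renormalization; this smoothness is what lets the same update run per-example (on-line) or summed over a batch, in contrast with the discrete reestimation of Baum-Welch.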