Incremental ML estimation of HMM parameters for efficient training

  • Authors:
  • Y. Gotoh; H. F. Silverman

  • Affiliations:
  • Division of Engineering, Brown University, Providence, RI, USA

  • Venue:
  • ICASSP '96: Proceedings of the 1996 IEEE International Conference on Acoustics, Speech, and Signal Processing - Volume 02
  • Year:
  • 1996

Abstract

Conventional training of a hidden Markov model (HMM) is performed by an expectation-maximization algorithm using a maximum likelihood (ML) criterion. It has been reported that an incremental variant of maximum a posteriori estimation can yield substantial speed improvements. That approach, however, requires a prior distribution at the start of training, and in some cases an appropriate prior is difficult to find. This paper presents a new approach for efficiently training HMM parameters using the standard ML criterion; no prior distribution is required. The algorithm sequentially selects a subset of data from the training set, updates the parameters from that subset, and iterates until convergence. A solid theoretical foundation ensures a monotone likelihood improvement, so stable convergence is guaranteed. Experimental results indicate substantially faster convergence than the standard batch training algorithm while maintaining the same level of recognition performance.
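The subset-based update scheme described in the abstract can be sketched as follows. This is a hypothetical, minimal illustration (not the paper's actual algorithm): a discrete-observation HMM whose expected counts are computed by the standard forward-backward pass, with parameters re-estimated after each small subset of training sequences rather than after a full pass over the data. Note that this naive per-subset re-estimation does not by itself carry the paper's monotone-likelihood guarantee; it only illustrates the "select a subset, update, iterate" pattern. All function and variable names here are invented for the sketch.

```python
import numpy as np

def forward_backward(obs, A, B, pi):
    """Scaled forward-backward pass for one observation sequence.

    Returns the expected transition counts (xi), expected emission
    counts (em), the initial-state posteriors, and the log-likelihood.
    """
    T, N = len(obs), A.shape[0]
    alpha = np.zeros((T, N)); beta = np.zeros((T, N)); c = np.zeros(T)
    alpha[0] = pi * B[:, obs[0]]
    c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):                       # scaled forward recursion
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):              # scaled backward recursion
        beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]
    gamma = alpha * beta                        # state posteriors per frame
    xi = np.zeros((N, N))
    for t in range(T - 1):                      # expected transition counts
        xi += (alpha[t][:, None] * A
               * (B[:, obs[t + 1]] * beta[t + 1])[None, :]) / c[t + 1]
    em = np.zeros_like(B)
    for t in range(T):                          # expected emission counts
        em[:, obs[t]] += gamma[t]
    return xi, em, gamma[0], np.log(c).sum()

def incremental_em(sequences, A, B, pi, subset_size=2, sweeps=5):
    """Incremental-style EM sketch: each sweep visits all subsets, and
    the parameters are re-estimated from each subset's expected counts
    immediately, instead of once per full pass as in batch Baum-Welch."""
    for _ in range(sweeps):
        for i in range(0, len(sequences), subset_size):
            xi_s = np.zeros_like(A); em_s = np.zeros_like(B)
            g0 = np.zeros_like(pi)
            for obs in sequences[i:i + subset_size]:
                xi, em, g, _ = forward_backward(obs, A, B, pi)
                xi_s += xi; em_s += em; g0 += g
            # Re-estimate immediately from this subset's counts.
            A = xi_s / xi_s.sum(axis=1, keepdims=True)
            B = em_s / em_s.sum(axis=1, keepdims=True)
            pi = g0 / g0.sum()
    return A, B, pi
```

In the batch algorithm, the inner re-estimation would move outside the subset loop and use counts accumulated over the entire training set; the frequent intermediate updates are what the abstract credits with the faster convergence.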