A bound on modeling error in observable operator models and an associated learning algorithm

  • Authors:
  • Ming-Jie Zhao, Herbert Jaeger, Michael Thon

  • Venue:
  • Neural Computation
  • Year:
  • 2009

Abstract

Observable operator models (OOMs) generalize hidden Markov models (HMMs) and can be represented in a structurally similar matrix formalism. The mathematical theory of OOMs gives rise to a family of constructive, fast, and asymptotically correct learning algorithms whose statistical efficiency, however, depends crucially on the optimization of two auxiliary transformation matrices. This optimization task is nontrivial; indeed, even formulating computationally accessible optimality criteria is not easy. Here we derive how a bound on the modeling error of an OOM can be expressed in terms of these auxiliary matrices, which in turn yields an optimization procedure for them and, finally, a complete learning algorithm: the error-controlling algorithm. Models learned by this algorithm come with an assured error bound on their parameters. The performance of this algorithm is illustrated by comparisons with two types of HMMs trained by the expectation-maximization algorithm, with the efficiency-sharpening algorithm, another recently developed learning algorithm for OOMs, and with predictive state representations (Littman & Sutton, 2001) trained by methods representing the state of the art in that field.
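As background for the matrix formalism referred to in the abstract, the sketch below illustrates the standard OOM representation only, not the error-controlling algorithm introduced in the paper: a sequence probability is computed as P(a_1 … a_n) = 1ᵀ τ_{a_n} ⋯ τ_{a_1} w_0, and an HMM can be converted into an equivalent OOM by setting τ_a = Mᵀ diag(P(a | state)). Function and variable names here are illustrative assumptions, not the authors' notation.

```python
import numpy as np

def hmm_to_oom(M, E):
    """Convert an HMM into a set of observable operators (standard construction;
    naming is an illustrative assumption, not the paper's notation).
    M[i, j] = P(next state j | state i);  E[i, a] = P(symbol a | state i).
    Returns one operator tau_a per observable symbol a."""
    return [M.T @ np.diag(E[:, a]) for a in range(E.shape[1])]

def sequence_probability(taus, w0, sequence):
    """P(a_1 ... a_n) = 1^T tau_{a_n} ... tau_{a_1} w_0 -- the core OOM formula."""
    w = w0
    for a in sequence:
        w = taus[a] @ w          # apply one observable operator per observed symbol
    return float(np.sum(w))      # multiply by the row vector of ones

# Toy 2-state HMM over 2 observable symbols.
M = np.array([[0.7, 0.3],
              [0.4, 0.6]])       # state transition probabilities
E = np.array([[0.9, 0.1],
              [0.2, 0.8]])       # emission probabilities
w0 = np.array([0.5, 0.5])        # initial state distribution

taus = hmm_to_oom(M, E)
print(sequence_probability(taus, w0, [0, 1, 1]))  # P(observing symbols 0, 1, 1)
```

The learning algorithms discussed in the abstract estimate the operators τ_a directly from data; the quality of that estimate is what the two auxiliary transformation matrices, and hence the error bound derived in the paper, control.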