Observable operator models (OOMs) are a class of models for stochastic processes that properly subsumes the class of processes that can be modeled by finite-dimensional hidden Markov models (HMMs). One of the main advantages of OOMs over HMMs is that they admit asymptotically correct learning algorithms. A series of learning algorithms has been developed, with increasing computational and statistical efficiency, whose recent culmination is the error-controlling (EC) algorithm developed by the first author. The EC algorithm is an iterative, asymptotically correct algorithm that yields (and minimizes) an assured upper bound on the modeling error. It runs at least an order of magnitude faster than EM-based HMM learning algorithms and yields significantly more accurate models. Here we present a significant improvement of the EC algorithm: the constructive error-controlling (CEC) algorithm. CEC inherits from EC the main idea of minimizing an upper bound on the modeling error, but is constructive where EC requires iteration. As a consequence, we obtain further gains in learning speed without loss of modeling accuracy.
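To make the OOM-subsumes-HMM claim concrete, the sketch below shows the standard OOM evaluation of sequence probabilities (this illustrates the model class only, not the EC/CEC learning algorithm). A d-dimensional OOM consists of an initial state w0, one operator matrix tau_a per observable symbol a, and an evaluation functional sigma, with P(a1...an) = sigma @ tau_{an} @ ... @ tau_{a1} @ w0. The concrete matrices here are hypothetical; they are built from an HMM (transitions T, emissions E), since every finite-dimensional HMM induces an OOM via tau_a = T @ diag(E[:, a]).

```python
import numpy as np

# Hypothetical 2-state HMM over a binary alphabet {0, 1}.
T = np.array([[0.7, 0.4],
              [0.3, 0.6]])          # column-stochastic transition matrix
E = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # E[s, a] = P(emit symbol a | state s)

# Observable operators of the induced OOM: tau_a = T @ diag(E[:, a]).
tau = [T @ np.diag(E[:, a]) for a in (0, 1)]
w0 = np.array([0.5, 0.5])           # initial state vector
sigma = np.ones(2)                  # evaluation functional (sigma @ T == sigma)

def sequence_probability(seq):
    """P(a1 ... an) = sigma @ tau_{an} @ ... @ tau_{a1} @ w0."""
    w = w0
    for a in seq:
        w = tau[a] @ w
    return float(sigma @ w)

# The probabilities of all length-2 sequences must sum to 1, because
# sum_a tau_a = T and sigma is invariant under T.
probs = [sequence_probability(s) for s in [(0, 0), (0, 1), (1, 0), (1, 1)]]
```

Note that the operators tau_a need not be the product of a stochastic matrix and a nonnegative diagonal; OOMs allow operators with negative entries as long as all sequence probabilities remain nonnegative, which is exactly why the OOM class is strictly larger than the HMM class.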