Automatic structure discovery is desirable in many Markov model applications where a good topology (states and transitions) is not known a priori. CSSR is an established pattern discovery algorithm for stationary and ergodic stochastic symbol sequences that learns a predictively optimal Markov representation consisting of so-called causal states. By means of a novel algebraic criterion, we prove that the causal states of a simple process disturbed by random errors are frequently too complex to be fully learned, causing CSSR to diverge. In fact, for many hidden Markov models that represent simple but noise-corrupted data, the causal state representation has infinite cardinality. We show that these problems can be solved by endowing CSSR with the ability to make approximations. The resulting algorithm, robust causal states (RCS), recovers the underlying causal structure from data corrupted by random substitutions, as demonstrated both theoretically and experimentally. The algorithm has potential applications in areas such as error correction and learning stochastic grammars.
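To make the notion of causal states concrete, the following is a minimal illustrative sketch in Python, not the authors' CSSR or RCS implementation: it estimates the empirical next-symbol distribution of every short history suffix and merges suffixes whose distributions agree within a tolerance, which mirrors the approximation idea the abstract attributes to RCS. The function names, the tolerance parameter, and the toy data are invented for illustration; a real implementation would use a statistical test and much longer sequences.

from collections import Counter, defaultdict

def next_symbol_dists(seq, max_len):
    # Empirical P(next symbol | history suffix) for every suffix of length 0..max_len.
    counts = defaultdict(Counter)
    for length in range(max_len + 1):
        for i in range(length, len(seq)):
            counts[seq[i - length:i]][seq[i]] += 1
    dists = {}
    for hist, ctr in counts.items():
        total = sum(ctr.values())
        dists[hist] = {s: c / total for s, c in ctr.items()}
    return dists

def group_into_states(dists, tol=0.05):
    # Merge suffixes whose predictive distributions are within `tol` in
    # total-variation distance; `tol` stands in for the approximation
    # tolerance that makes the grouping robust to noise.
    states = []  # each entry: (representative distribution, member suffixes)
    for hist, dist in dists.items():
        for rep, members in states:
            symbols = set(rep) | set(dist)
            tv = 0.5 * sum(abs(rep.get(s, 0.0) - dist.get(s, 0.0)) for s in symbols)
            if tv < tol:
                members.append(hist)
                break
        else:
            states.append((dist, [hist]))
    return states

if __name__ == "__main__":
    # Toy alternating process; suffixes ending in "a" and in "b" fall into
    # separate predictive groups, the empty history into a third.
    data = "ab" * 500
    for dist, members in group_into_states(next_symbol_dists(data, max_len=2), tol=0.1):
        print(sorted(members), {s: round(p, 2) for s, p in dist.items()})

In this sketch the merging step is a simple greedy pass; the published algorithms instead split and refine states with hypothesis tests, which is what the convergence and divergence results in the abstract concern.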