The standard approach to learning Markov models with hidden state uses the Expectation-Maximization (EM) framework. While this approach has had a significant impact on several practical applications (e.g., speech recognition, biological sequence alignment), it has two major limitations: it requires a known model topology, and learning converges only to a local optimum. We propose a new PAC framework for learning both the topology and the parameters of partially observable Markov models. Our algorithm learns a Probabilistic Deterministic Finite Automaton (PDFA) that approximates a Hidden Markov Model (HMM) up to some desired degree of accuracy. We discuss theoretical conditions under which the algorithm produces an optimal solution (in the PAC sense) and demonstrate promising performance on simple dynamical systems.
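The abstract gives no pseudocode, but state merging is the standard route to PAC-learning both PDFA topology and parameters in this literature (e.g., Clark and Thollard). The sketch below is an illustrative Python implementation of that generic idea, not the authors' algorithm: the L-infinity distinguishability test over empirical suffix distributions, the threshold `mu`, the cutoff `min_count`, and the end-of-string marker `END` are all assumptions made for the example.

```python
# Illustrative sketch of state-merging PDFA learning (NOT the paper's algorithm).
# States are grown from the empty prefix; a candidate state is merged into an
# existing one when their empirical suffix distributions are mu-close in L-infinity.
from collections import Counter

END = "$"  # assumed end-of-string symbol

def next_symbol_distribution(suffixes):
    """Empirical distribution of the next symbol (or END) over a set of suffixes."""
    counts = Counter(s[0] if s else END for s in suffixes)
    total = sum(counts.values())
    return {sym: c / total for sym, c in counts.items()}

def linf_distance(p, q):
    return max(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in set(p) | set(q))

def learn_pdfa(sample, alphabet, mu=0.2, min_count=20):
    states = {0: list(sample)}   # state id -> sample suffixes reaching that state
    trans = {}                   # (state, symbol) -> state
    frontier, next_id = [0], 1
    while frontier:
        q = frontier.pop()
        for a in alphabet:
            suffixes = [s[1:] for s in states[q] if s and s[0] == a]
            if len(suffixes) < min_count:
                continue  # too little data to test; the sketch simply skips
            d = next_symbol_distribution(suffixes)
            target = None
            for r in states:
                if linf_distance(d, next_symbol_distribution(states[r])) <= mu:
                    target = r  # indistinguishable: merge into existing state
                    break
            if target is None:   # distinguishable: promote candidate to new state
                target = next_id
                states[target] = suffixes
                frontier.append(target)
                next_id += 1
            else:                # simplification: merged states are not reprocessed
                states[target].extend(suffixes)
            trans[(q, a)] = target
    # Emission/stop probabilities from empirical next-symbol frequencies per state.
    probs = {q: next_symbol_distribution(ss) for q, ss in states.items()}
    return trans, probs

if __name__ == "__main__":
    import random
    random.seed(0)
    # Toy data: strings from (ab)* with stopping probability 0.3 after each "ab".
    sample = []
    for _ in range(2000):
        s = ""
        while random.random() < 0.7:
            s += "ab"
        sample.append(s)
    trans, probs = learn_pdfa(sample, alphabet="ab")
    print("states:", sorted(probs), "transitions:", trans)
```

On the toy (ab)* data this recovers the expected two-state machine: state 0 emits "a" or stops, state 1 emits "b" and returns to state 0. The distinguishability parameter `mu` plays the role of the state-distinguishability quantity that such PAC analyses typically depend on; choosing it too large merges genuinely distinct states, while too small a value fragments the automaton.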