Hidden Markov chains: convergence rates and the complexity of inference
Machine Learning
A hidden Markov chain (hmc) is a finite ergodic Markov chain in which each state is labelled 0 or 1. As the Markov chain moves through a random trajectory, the hmc emits a 0 or a 1 at each time step according to the label of the state just entered. The inference problem is to construct a mechanism which will emit 0's and 1's and which is equivalent to a given hmc in the sense of having identical long-term behavior. We define the inference problem in a learning setting in which an algorithm can query an oracle for the long-term probability of any binary string. We prove that inference is hard: any algorithm for inference must make exponentially many oracle calls. Our method is information-theoretic and does not depend on separation assumptions for any complexity classes. We show that the related discrimination problem is also hard, but that on a nontrivial subclass of hmc's there is a randomized algorithm for discrimination. Finally, we give a polynomial-time algorithm for reducing a hidden Markov chain to its minimal form, from which a new algorithm for deciding equivalence of hmc's follows.
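The oracle in the learning setting above answers queries for the long-term probability of a binary string. For a given hmc this quantity is computable directly: start the chain in its stationary distribution and keep, at each step, only the transitions into states whose label matches the next symbol of the query string. The sketch below illustrates this on a small example; the function and variable names (`stationary`, `string_probability`, the two-state matrix `P`) are illustrative assumptions, not from the paper.

```python
def stationary(P, iters=10_000):
    """Approximate the stationary distribution of a row-stochastic
    transition matrix P by power iteration; assumes ergodicity."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def string_probability(P, labels, s):
    """Long-term probability that the hmc emits the binary string s.

    P      : row-stochastic transition matrix of the Markov chain
    labels : labels[i] in {0, 1} is the label of state i
    s      : the query string, e.g. "010"
    """
    n = len(P)
    pi = stationary(P)
    # Mass of being in a state whose label matches the first symbol.
    v = [pi[i] if labels[i] == int(s[0]) else 0.0 for i in range(n)]
    # Step the chain forward, keeping only transitions into states
    # whose label matches the next symbol of s.
    for ch in s[1:]:
        b = int(ch)
        v = [sum(v[i] * P[i][j] for i in range(n)) if labels[j] == b
             else 0.0
             for j in range(n)]
    return sum(v)

# Example: a two-state chain, one state labelled 0 and one labelled 1.
P = [[0.9, 0.1],
     [0.2, 0.8]]
labels = [0, 1]

# The probabilities of all binary strings of a fixed length sum to 1.
total = sum(string_probability(P, labels, s)
            for s in ["00", "01", "10", "11"])
```

Such an oracle is cheap for a known hmc; the hardness result says the reverse direction is not: recovering an equivalent mechanism from oracle answers alone requires exponentially many queries.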