Communications of the ACM
How to construct random functions. Journal of the ACM (JACM).
Learnability and the Vapnik-Chervonenkis dimension. Journal of the ACM (JACM).
Prediction-preserving reducibility. Journal of Computer and System Sciences (3rd Annual Conference on Structure in Complexity Theory, June 14–17, 1988).
Computational learning theory: an introduction.
Cryptographic hardness of distribution-specific learning. STOC '93: Proceedings of the Twenty-Fifth Annual ACM Symposium on Theory of Computing.
Cryptographic limitations on learning Boolean formulae and finite automata. Journal of the ACM (JACM).
Smooth on-line learning algorithms for hidden Markov models. Neural Computation.
An introduction to computational learning theory.
Computational Complexity of Machine Learning.
Optimizing two-dimensional search results presentation. Proceedings of the Fourth ACM International Conference on Web Search and Data Mining.
Closing the learning-planning loop with predictive state representations. International Journal of Robotics Research.
A spectral algorithm for learning hidden Markov models. Journal of Computer and System Sciences.
Spectral learning of latent-variable PCFGs. ACL '12: Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers - Volume 1.
Spectral dependency parsing with latent variables. EMNLP-CoNLL '12: Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning.
A simple result is presented that links the learning of hidden Markov models to complexity-theoretic results on the nonlearnability of finite automata under certain cryptographic assumptions. Rather than requiring learning under all probability distributions, or even under certain specific ones, the hidden Markov model is learned under the distribution induced by the model itself.
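To make the phrase "distribution induced by the model itself" concrete, the following minimal sketch (not taken from the paper; all names and parameter values are illustrative assumptions) shows how a hidden Markov model defines a distribution over observation sequences: the learner's training examples in this setting are exactly such samples drawn from the target model.

```python
import random

def sample_hmm(pi, A, B, length, rng=None):
    """Draw one observation sequence of the given length from an HMM.

    pi: initial state distribution, pi[i] = P(first state = i)
    A:  transition matrix, A[i][j] = P(next state = j | current state = i)
    B:  emission matrix, B[i][k] = P(observe symbol k | state = i)
    """
    rng = rng or random.Random(0)

    def draw(dist):
        # Sample an index from a discrete distribution by inverting the CDF.
        r, acc = rng.random(), 0.0
        for i, p in enumerate(dist):
            acc += p
            if r < acc:
                return i
        return len(dist) - 1  # guard against floating-point rounding

    state = draw(pi)
    obs = []
    for _ in range(length):
        obs.append(draw(B[state]))   # emit a symbol from the current state
        state = draw(A[state])       # then move to the next hidden state
    return obs

# Illustrative two-state HMM over a binary observation alphabet.
pi = [0.5, 0.5]
A = [[0.9, 0.1], [0.2, 0.8]]
B = [[0.7, 0.3], [0.1, 0.9]]
seq = sample_hmm(pi, A, B, 10)
```

Each call produces one observation sequence; repeated sampling approximates the model's induced distribution, which is the distribution under which learning takes place in the setting the abstract describes.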