Recently, we have developed a learning model, called stochastic finite learning, that connects concepts from PAC learning with inductive inference learning models. The motivation for this work is as follows. Many important learning problems can be formalized within Gold's (1967) model of learning in the limit, and they can be shown to be algorithmically solvable in principle. However, since a limit learner is only required to converge, one never knows at any particular learning stage whether or not it has already been successful. Such uncertainty may be unacceptable in many applications. The present paper surveys this new approach to overcoming the uncertainty, an approach that potentially has a wide range of applicability.
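To make the convergence uncertainty concrete, the following is a minimal illustrative sketch (not taken from the paper) of a Gold-style limit learner for a toy class: the languages L_n = {0, 1, ..., n}, learned from positive data. The learner conjectures the largest example seen so far; it stabilizes on the correct hypothesis in the limit, yet at no finite stage can it certify that its current conjecture is final, since a larger example may still appear.

```python
# Illustrative sketch of learning in the limit (toy example, not from
# the surveyed paper): learn n for the language L_n = {0, 1, ..., n}
# from a stream of positive examples.

def limit_learner(stream):
    """Yield a hypothesis (a conjectured n) after each positive example."""
    hypothesis = 0
    for example in stream:
        # Conjecture the largest element observed so far.
        hypothesis = max(hypothesis, example)
        yield hypothesis

# A presentation (text) of the target language L_5:
text = [0, 3, 1, 5, 2, 4, 5, 0]
conjectures = list(limit_learner(text))
print(conjectures)  # [0, 3, 3, 5, 5, 5, 5, 5]
# The sequence converges to the correct hypothesis 5, but the learner
# itself has no way of knowing, at any stage, that convergence occurred.
```

Stochastic finite learning addresses exactly this gap: under distributional assumptions on the examples, the learner can stop after a finite, bounded number of samples and output a hypothesis that is correct with high confidence.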