Communications of the ACM
Linear function neurons: Structure and training
Biological Cybernetics
On the learnability of Boolean formulae
STOC '87 Proceedings of the nineteenth annual ACM symposium on Theory of computing
Information Processing Letters
Learning regular sets from queries and counterexamples
Information and Computation
Quantifying inductive bias: AI learning algorithms and Valiant's learning framework
Artificial Intelligence
Learning in the presence of malicious errors
STOC '88 Proceedings of the twentieth annual ACM symposium on Theory of computing
Computational limitations on learning from examples
Journal of the ACM (JACM)
Cryptographic limitations on learning Boolean formulae and finite automata
STOC '89 Proceedings of the twenty-first annual ACM symposium on Theory of computing
Learnability and the Vapnik-Chervonenkis dimension
Journal of the ACM (JACM)
Mistake bounds and logarithmic linear-threshold learning algorithms
COLT '88 Proceedings of the first annual workshop on Computational learning theory
Training a 3-node neural network is NP-complete
COLT '88 Proceedings of the first annual workshop on Computational learning theory
Learnability by fixed distributions
COLT '88 Proceedings of the first annual workshop on Computational learning theory
From on-line to batch learning
COLT '89 Proceedings of the second annual workshop on Computational learning theory
Equivalence of models for polynomial learnability
Information and Computation
Learning read-once formulas with queries
Journal of the ACM (JACM)
Decision theoretic generalizations of the PAC model for neural net and other learning applications
Information and Computation
Learning Conjunctive Concepts in Structural Domains
Machine Learning
On Learning Sets and Functions
Machine Learning
When Are k-Nearest Neighbor and Back Propagation Accurate for Feasible Sized Sets of Examples?
Proceedings of the EURASIP Workshop 1990 on Neural Networks
Estimation of Dependences Based on Empirical Data (Springer Series in Statistics)
Predicting (0, 1)-functions on randomly drawn points
SFCS '88 Proceedings of the 29th Annual Symposium on Foundations of Computer Science
The weighted majority algorithm
SFCS '89 Proceedings of the 30th Annual Symposium on Foundations of Computer Science
Learning disjunctions of conjunctions
IJCAI'85 Proceedings of the 9th international joint conference on Artificial intelligence - Volume 1
The PAC-learnability of planning algorithms: Investigating simple planning domains
Information Sciences: an International Journal
From relational databases to belief networks
UAI'91 Proceedings of the Seventh conference on Uncertainty in Artificial Intelligence
This paper surveys some recent theoretical results on the efficiency of machine learning algorithms. The main tool described is the notion of Probably Approximately Correct (PAC) learning, introduced by Valiant. We define this learning model and then examine some of the results obtained within it. We then consider some criticisms of the PAC model and the extensions proposed to address them. Finally, we look briefly at other models recently proposed in computational learning theory.
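The PAC model named in the abstract has a standard formal statement; as a brief sketch in conventional notation (the symbols C, D, epsilon, and delta are the usual ones, not taken from this page):

```latex
% A concept class $C$ over instance space $X$ is PAC-learnable if there is an
% algorithm $A$ such that for every target $c \in C$, every distribution $D$
% over $X$, and all $\epsilon, \delta \in (0,1)$: given $m$ labeled examples
% drawn i.i.d. from $D$, with $m$ polynomial in $1/\epsilon$ and $1/\delta$
% (and the relevant size parameters), $A$ outputs a hypothesis $h$ satisfying
\Pr_{S \sim D^m}\bigl[\,\mathrm{err}_D(h) \le \epsilon\,\bigr] \ge 1 - \delta,
\qquad \text{where } \mathrm{err}_D(h) = \Pr_{x \sim D}\bigl[\,h(x) \ne c(x)\,\bigr].
% "Approximately correct" is the error bound $\epsilon$; "probably" is the
% confidence $1-\delta$; efficient PAC learning additionally requires $A$
% to run in time polynomial in the same parameters.
```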