In this paper we initiate an investigation of generalizations of the Probably Approximately Correct (PAC) learning model that attempt to significantly weaken the target function assumptions. The ultimate goal in this direction is informally termed agnostic learning, in which we make virtually no assumptions on the target function. The name derives from the fact that, as designers of learning algorithms, we give up the belief that Nature (as represented by the target function) has a simple or succinct explanation.

We give a number of both positive and negative results that provide an initial outline of the possibilities for agnostic learning. Our results include hardness results for the most obvious generalization of the PAC model to an agnostic setting, an efficient and general agnostic learning method based on dynamic programming, relationships between loss functions for agnostic learning, and an algorithm for learning in a model for problems involving hidden variables.
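To make the agnostic objective concrete: with no assumption on the target, the learner's goal is simply to find, within some fixed hypothesis class, the hypothesis with the fewest disagreements on the sample. The sketch below (an illustration only, not the paper's dynamic-programming algorithm) does this for the simplest class, threshold functions on the real line and their complements, via a single sorted scan; the function name and interface are hypothetical.

```python
def best_threshold(sample):
    """Agnostically fit a threshold rule to a labeled sample on the real line.

    Considers hypotheses h_t(x) = [x >= t] and their complements, and returns
    (threshold, direction, errors) minimizing empirical 0-1 disagreements.
    direction = +1 means "predict 1 for x >= threshold"; -1 means the complement.
    """
    pts = sorted(sample)                      # (x, label) pairs, label in {0, 1}
    n = len(pts)
    total_ones = sum(lab for _, lab in pts)
    ones_left = 0                             # 1-labels strictly left of the cut
    best_err, best_cut, best_dir = n + 1, 0, +1
    for i in range(n + 1):                    # cut between pts[i-1] and pts[i]
        # Disagreements if we predict 1 on the right of the cut:
        zeros_right = (n - i) - (total_ones - ones_left)
        err_pos = ones_left + zeros_right
        err_neg = n - err_pos                 # the complement hypothesis
        for err, d in ((err_pos, +1), (err_neg, -1)):
            if err < best_err:
                best_err, best_cut, best_dir = err, i, d
        if i < n:
            ones_left += pts[i][1]
    t = pts[best_cut][0] if best_cut < n else float("inf")
    return t, best_dir, best_err
```

On a noisy sample such as `[(0,1), (1,0), (2,0), (3,1), (4,1)]`, no threshold is consistent, and the scan returns the rule with a single disagreement, which is exactly the agnostic criterion: compete with the best hypothesis in the class rather than with the (unknown) target.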