We introduce a new model for learning in the presence of noise, which we call the Nasty Noise model. This model generalizes previously considered models of learning with noise. The learning process in this model, a variant of the PAC model, proceeds as follows. Suppose that the learning algorithm asks for m examples during its execution. The examples it receives are generated by a nasty adversary in two steps. First, the adversary draws m examples independently according to a fixed distribution D, unknown to the learning algorithm, as in the PAC model. Then the adversary, upon seeing the specific m examples that were drawn (and using its knowledge of the target function, the distribution D, and the learning algorithm), may remove a fraction of the examples of its choice and replace them by the same number of arbitrary examples of its choice; the m modified examples are then given to the learning algorithm. The only restriction on the adversary is that the number of examples it may modify must be distributed according to a binomial distribution with parameters η (the noise rate) and m.

On the negative side, we prove that no algorithm can achieve accuracy ε < 2η when learning any non-trivial class of functions. We also give lower bounds on the sample complexity required to achieve accuracy ε = 2η + Δ. On the positive side, we show that a polynomial number of examples (in the usual parameters and in 1/(ε - 2η)) suffices for learning any class of finite VC-dimension with accuracy ε > 2η. This algorithm may not be efficient; however, we also show that a fairly wide family of concept classes can be learned efficiently in the presence of nasty noise.
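To make the sampling protocol concrete, here is a minimal Python sketch of how a nasty-noise sample might be generated. The callables D, target, and adversary are hypothetical stand-ins introduced for illustration only; the paper does not prescribe any particular implementation of the adversary.

```python
import random

def nasty_sample(m, eta, D, target, adversary):
    """Generate a nasty-noise sample of size m (illustrative sketch).

    D         -- callable drawing one unlabeled example (the unknown distribution)
    target    -- callable mapping an example to its true label (the target function)
    adversary -- callable mapping (clean_sample, budget) to a corrupted sample,
                 replacing at most `budget` labeled pairs with arbitrary ones
    """
    # Step 1: draw m labeled examples i.i.d. from D, as in the PAC model.
    clean = [(x, target(x)) for x in (D() for _ in range(m))]

    # Step 2: the adversary's budget is Binomial(m, eta) -- each of the m
    # draws is independently corruptible with probability eta.
    budget = sum(random.random() < eta for _ in range(m))

    # Step 3: the adversary, which sees the whole clean sample (and is assumed
    # to know D, the target, and the learner), replaces up to `budget` pairs
    # with arbitrary labeled examples of its choice.
    return adversary(clean, budget)
```

Note that drawing the budget as a sum of m independent Bernoulli(η) trials realizes exactly the binomial restriction on the adversary stated in the model.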