We introduce a new model for learning in the presence of noise, which we call the Nasty Noise model. This model generalizes previously considered models of learning with noise. The learning process in this model, a variant of the PAC model, proceeds as follows. Suppose that the learning algorithm asks for m examples during its execution. The examples that the algorithm receives are generated by a nasty adversary, which works in the following steps. First, the adversary chooses m examples independently according to the fixed (but unknown to the learning algorithm) distribution D, as in the PAC model. Then the powerful adversary, upon seeing the specific m examples that were chosen (and using its knowledge of the target function, the distribution D, and the learning algorithm), is allowed to remove a fraction of the examples of its choice and to replace them with the same number of arbitrary examples of its choice; the m modified examples are then given to the learning algorithm. The only restriction on the adversary is that the number of examples it modifies must be distributed according to a binomial distribution with parameters η (the noise rate) and m. On the negative side, we prove that no algorithm can achieve accuracy ε < 2η in this model. On the positive side, we show that any concept class of finite VC-dimension can be learned with any accuracy ε > 2η from a polynomial number of examples. This algorithm may not be efficient; however, we also show that a fairly wide family of concept classes can be efficiently learned in the presence of nasty noise.
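The example-generation process described above can be sketched in code. This is a minimal illustrative simulation, not from the paper: the names `nasty_sample`, `draw_example`, and `flip_first` are assumptions introduced here, and the label-flipping adversary is only one toy instance of the arbitrary adversaries the model allows.

```python
import random


def nasty_sample(m, eta, draw_example, target, adversary, rng=random):
    """Simulate one round of the nasty-noise example oracle.

    draw_example() draws one point x from the fixed, unknown distribution D;
    target(x) is the hidden target function; adversary(sample, k) sees the
    entire clean labeled sample and returns it with exactly k examples
    replaced arbitrarily.  All names here are illustrative.
    """
    # Step 1: draw m examples i.i.d. from D, labeled by the target function.
    clean = [(x, target(x)) for x in (draw_example() for _ in range(m))]
    # Step 2: the number of corrupted examples is Binomial(m, eta) --
    # the only restriction the model places on the adversary.
    k = sum(1 for _ in range(m) if rng.random() < eta)
    # Step 3: the adversary, knowing everything, modifies k examples.
    return adversary(clean, k)


def flip_first(sample, k):
    """Toy adversary: flip the labels of the first k examples."""
    return [(x, not y) if i < k else (x, y) for i, (x, y) in enumerate(sample)]
```

For instance, `nasty_sample(100, 0.1, ...)` with `flip_first` yields a sample in which roughly 10% of the labels are wrong; a real nasty adversary could instead concentrate its k corruptions wherever they hurt the learner most, since it sees the drawn sample before deciding.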