The combination of two major challenges in algorithmic learning is investigated: dealing with huge amounts of irrelevant information and learning from noisy data. It is shown that large classes of Boolean concepts that depend on only a small fraction of their variables—so-called juntas—can be learned efficiently from uniformly distributed examples that are corrupted by random attribute and classification noise. We present solutions to the manifold problems that inhibit a straightforward generalization of the noise-free case. Additionally, we extend our methods to non-uniformly distributed examples and derive new results for monotone juntas in this setting. We assume that the attribute noise is generated by a product distribution; otherwise, fault-tolerant learning is in general impossible, as we show by constructing a noise distribution P and a concept class $\mathcal{C}$ that cannot be learned under P-noise.
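To make the noise model concrete, the following sketch (a hypothetical illustration, not the paper's algorithm) generates a uniformly distributed example of a junta and corrupts it with product-distribution attribute noise and random classification noise; the function names and parameters are our own.

```python
import random

def sample_noisy_example(f, relevant, n, attr_noise, label_noise, rng=random):
    """Draw one uniformly distributed example of an n-bit junta f
    (depending only on the indices in `relevant`), then corrupt it:
    each attribute i is flipped independently with probability
    attr_noise[i] (product-distribution attribute noise), and the
    label is flipped with probability label_noise (random
    classification noise)."""
    x = [rng.randint(0, 1) for _ in range(n)]
    y = f(tuple(x[i] for i in relevant))  # clean label from relevant bits only
    noisy_x = [b ^ (rng.random() < attr_noise[i]) for i, b in enumerate(x)]
    noisy_y = y ^ (rng.random() < label_noise)
    return noisy_x, noisy_y

# Example: a 3-junta (parity of attributes 0, 5, 9) among n = 20 attributes.
n = 20
relevant = (0, 5, 9)
parity = lambda bits: bits[0] ^ bits[1] ^ bits[2]
attr_noise = [0.1] * n  # product noise: each attribute flips w.p. 0.1
x, y = sample_noisy_example(parity, relevant, n, attr_noise, 0.2)
```

The learner sees only the corrupted pair `(x, y)`; the difficulty the paper addresses is that attribute noise perturbs exactly the bits on which the hidden junta depends.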