Information Processing Letters
We introduce a new notion, called Fourier-accessibility, that precisely characterizes the class of Boolean functions for which a standard greedy learning algorithm identifies all relevant attributes. If the target function is Fourier-accessible, the success probability of the greedy algorithm can be made arbitrarily close to one; if it is not, the error probability tends to one. Finally, we extend these results to input data corrupted by random attribute and classification noise, and prove that greedy learning is quite robust against such errors.
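The abstract does not spell out the greedy algorithm itself. As one plausible illustration (the paper's exact procedure may differ), a standard set-cover-style greedy heuristic for attribute selection repeatedly picks the attribute that separates the most pairs of oppositely labeled examples; the function name and data format below are our own:

```python
def greedy_relevant_attributes(sample, n):
    """Set-cover-style greedy attribute selection (illustrative sketch).

    sample: list of (x, y) pairs, where x is a length-n tuple of 0/1
    attribute values and y is the 0/1 label.  Every pair of oppositely
    labeled examples must differ on at least one chosen attribute.
    """
    # All pairs of examples that carry different labels.
    pairs = [(x, xp) for x, y in sample for xp, yp in sample if y != yp]
    chosen = set()
    while pairs:
        # Greedy step: the attribute separating the most uncovered pairs.
        best = max(range(n),
                   key=lambda i: sum(x[i] != xp[i] for x, xp in pairs))
        if all(x[best] == xp[best] for x, xp in pairs):
            break  # no attribute separates the rest: data inconsistent
        chosen.add(best)
        # Keep only the pairs not yet separated by a chosen attribute.
        pairs = [(x, xp) for x, xp in pairs if x[best] == xp[best]]
    return chosen
```

For example, on the full truth table of f(x) = x1 AND x2 over three attributes, this sketch returns the relevant set {0, 1}.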