Decision trees and influences of variables over product probability spaces
Combinatorics, Probability and Computing
Bounding the average sensitivity and noise sensitivity of polynomial threshold functions
Proceedings of the forty-second ACM symposium on Theory of computing
Learning and lower bounds for AC0 with threshold gates
APPROX/RANDOM'10: Proceedings of the 13th and 14th International Workshops on Approximation, Randomization, and Combinatorial Optimization: Algorithms and Techniques
Approximating the Influence of Monotone Boolean Functions in O(√n) Query Complexity
APPROX/RANDOM'11: Proceedings of the 14th and 15th International Workshops on Approximation, Randomization, and Combinatorial Optimization: Algorithms and Techniques
Hardness results for agnostically learning low-degree polynomial threshold functions
Proceedings of the twenty-second annual ACM-SIAM symposium on Discrete Algorithms
Approximating the Influence of Monotone Boolean Functions in O(√n) Query Complexity
ACM Transactions on Computation Theory (TOCT)
Approximation by DNF: examples and counterexamples
ICALP'07 Proceedings of the 34th international conference on Automata, Languages and Programming
Learnability of DNF with representation-specific queries
Proceedings of the 4th conference on Innovations in Theoretical Computer Science
A Composition Theorem for the Fourier Entropy-Influence Conjecture
ICALP'13: Proceedings of the 40th International Conference on Automata, Languages, and Programming, Part I
Improved Approximation of Linear Threshold Functions
Computational Complexity
We give an algorithm that learns any monotone Boolean function $f: \{-1,1\}^n \rightarrow \{-1,1\}$ to any constant accuracy, under the uniform distribution, in time polynomial in $n$ and in the decision tree size of $f$. This is the first algorithm that can learn arbitrary monotone Boolean functions to high accuracy, using random examples only, in time polynomial in a reasonable measure of the complexity of $f$. A key ingredient of the result is a new bound showing that the average sensitivity of any monotone function computed by a decision tree of size $s$ must be at most $\sqrt{\log s}$. This bound has proved to be of independent utility in the study of decision tree complexity [O. Schramm, R. O'Donnell, M. Saks, and R. Servedio, Every decision tree has an influential variable, in Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science, IEEE Computer Society, Los Alamitos, CA, 2005, pp. 31-39]. We generalize the basic inequality and learning result described above in various ways—specifically, to partition size (a stronger complexity measure than decision tree size), $p$-biased measures over the Boolean cube (rather than just the uniform distribution), and real-valued (rather than just Boolean-valued) functions.
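The average sensitivity in the abstract's key bound can be computed directly for small functions: it is the sum over coordinates $i$ of $\Pr_x[f(x) \neq f(x^{\oplus i})]$ under the uniform distribution. The sketch below (a brute-force illustration, not the paper's algorithm; the `maj3` example and its 6-leaf decision tree size are our own choices) checks the inequality on the 3-bit majority function, which is monotone:

```python
from itertools import product

def average_sensitivity(f, n):
    """Brute-force average sensitivity of f: {0,1}^n -> {0,1}:
    sum over coordinates i of Pr_x[f(x) != f(x with bit i flipped)],
    x uniform over the cube."""
    flips = 0
    for x in product([0, 1], repeat=n):
        for i in range(n):
            y = list(x)
            y[i] ^= 1  # flip coordinate i
            if f(x) != f(tuple(y)):
                flips += 1
    return flips / 2 ** n

# 3-bit majority: monotone, computable by a decision tree with 6 leaves.
maj3 = lambda x: int(sum(x) >= 2)

print(average_sensitivity(maj3, 3))  # 1.5
```

Here $\mathrm{AS}(\mathrm{maj}_3) = 1.5$, consistent with the bound $\sqrt{\log_2 6} \approx 1.61$ for a size-6 tree.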