References
- The Strength of Weak Learnability. Machine Learning.
- Efficient distribution-free learning of probabilistic concepts. Journal of Computer and System Sciences (special issue: 31st IEEE Conference on Foundations of Computer Science, Oct. 22–24, 1990).
- On the boosting ability of top-down decision tree learning algorithms. Journal of Computer and System Sciences.
- Learning Monotone Decision Trees in Polynomial Time. CCC '06: Proceedings of the 21st Annual IEEE Conference on Computational Complexity.
- ALT '08: Proceedings of the 19th International Conference on Algorithmic Learning Theory.
- Improved Approximation of Linear Threshold Functions. Computational Complexity.
Predicting class probabilities and other real-valued quantities is often more useful than binary classification, but comparatively little work in PAC-style learning addresses this issue. We show that two rich classes of real-valued functions are learnable in the probabilistic-concept framework of Kearns and Schapire. Let X be a subset of Euclidean space and f be a real-valued function on X. We say f is a nested halfspace function if, for each real threshold t, the set {x ∈ X | f(x) ≤ t} is a halfspace. This broad class of functions includes binary halfspaces with a margin (e.g., SVMs) as a special case. We give an efficient algorithm that provably learns (Lipschitz-continuous) nested halfspace functions on the unit ball, with sample complexity independent of the number of dimensions. We also introduce the class of uphill decision trees: real-valued decision trees (sometimes called regression trees) in which the sequence of leaf values, read left to right, is non-decreasing. We give an efficient algorithm for provably learning uphill decision trees whose sample complexity is polynomial in the number of dimensions but independent of the size of the tree (which may be exponential). Both of our algorithms employ a real-valued extension of Mansour and McAllester's boosting algorithm.
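The two function classes defined above can be illustrated concretely. The sketch below is a hypothetical illustration, not the paper's learning algorithm: it shows (a) that composing a linear function with a non-decreasing link g yields a nested halfspace function, since every sublevel set {x : g(w·x) ≤ t} is a halfspace, and (b) a toy tuple-based representation of an uphill decision tree with a check that its left-to-right leaf sequence is non-decreasing. All names here (`nested_halfspace_fn`, `is_uphill`, the tuple encoding) are invented for this example.

```python
import math

def dot(w, x):
    # Inner product of two equal-length vectors.
    return sum(wi * xi for wi, xi in zip(w, x))

def nested_halfspace_fn(w, g):
    """f(x) = g(w . x). When g is non-decreasing, each sublevel set
    {x : f(x) <= t} is the halfspace {x : w . x <= sup{z : g(z) <= t}},
    so f is a nested halfspace function."""
    return lambda x: g(dot(w, x))

# Example: a Lipschitz sigmoid link applied to a unit-norm direction.
f = nested_halfspace_fn([0.6, 0.8], lambda z: 1.0 / (1.0 + math.exp(-z)))

# Toy encoding of a real-valued decision tree: either a leaf value (float)
# or an internal node (feature_index, threshold, left_subtree, right_subtree),
# where the left branch is taken when x[feature_index] <= threshold.
def evaluate(tree, x):
    while isinstance(tree, tuple):
        i, thresh, left, right = tree
        tree = left if x[i] <= thresh else right
    return tree

def leaves_in_order(tree):
    # Leaf values in left-to-right order.
    if not isinstance(tree, tuple):
        return [tree]
    return leaves_in_order(tree[2]) + leaves_in_order(tree[3])

def is_uphill(tree):
    # An uphill tree's leaf sequence is non-decreasing.
    vals = leaves_in_order(tree)
    return all(a <= b for a, b in zip(vals, vals[1:]))

# Leaf sequence 0.1, 0.4, 0.9 — non-decreasing, hence uphill.
tree = (0, 0.5, (1, 0.5, 0.1, 0.4), 0.9)
```

Note that the uphill property constrains only the ordering of leaf values, not the tree's size or shape, which is why the sample complexity can be independent of the (possibly exponential) number of leaves.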