Learnability and the Vapnik-Chervonenkis dimension
Journal of the ACM (JACM)
A polynomial time algorithm that learns two hidden unit nets
Neural Computation
An introduction to computational learning theory
Learning an intersection of a constant number of halfspaces over a uniform distribution
Journal of Computer and System Sciences - Special issue: papers from the 32nd and 34th annual symposia on foundations of computer science, Oct. 2–4, 1991 and Nov. 3–5, 1993
A Random Sampling based Algorithm for Learning the Intersection of Half-spaces
FOCS '97 Proceedings of the 38th Annual Symposium on Foundations of Computer Science
An Algorithmic Theory of Learning: Robust Concepts and Random Projection
FOCS '99 Proceedings of the 40th Annual Symposium on Foundations of Computer Science
A geometric theory of outliers and perturbation
Agnostically Learning Halfspaces
FOCS '05 Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science
Estimation of Dependences Based on Empirical Data (Springer Series in Statistics)
The geometry of logconcave functions and sampling algorithms
Random Structures & Algorithms
Learning Geometric Concepts via Gaussian Surface Area
FOCS '08 Proceedings of the 2008 49th Annual IEEE Symposium on Foundations of Computer Science
The spectral method for general mixture models
COLT'05 Proceedings of the 18th annual conference on Learning Theory
On spectral learning of mixtures of distributions
COLT'05 Proceedings of the 18th annual conference on Learning Theory
An Inequality for Nearly Log-Concave Distributions With Applications to Learning
IEEE Transactions on Information Theory
A random-sampling-based algorithm for learning intersections of halfspaces
Journal of the ACM (JACM)
In 1990, E. Baum gave an elegant polynomial-time algorithm for learning the intersection of two origin-centered halfspaces with respect to any symmetric distribution (i.e., any $\mathcal{D}$ such that $\mathcal{D}(E) = \mathcal{D}(-E)$ for every measurable set $E$) [3]. Here we prove that his algorithm also succeeds with respect to any mean-zero distribution with a log-concave density (a broad class of distributions that need not be symmetric). As far as we are aware, prior to this work, it was not known how to efficiently learn any class of intersections of halfspaces with respect to log-concave distributions. The key to our proof is a "Brunn-Minkowski" inequality for log-concave densities that may be of independent interest.
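To make the distributional assumption concrete: a density $f$ is log-concave if $f(\lambda x + (1-\lambda)y) \ge f(x)^{\lambda} f(y)^{1-\lambda}$ for all $x, y$ and $\lambda \in [0,1]$. The sketch below (all function names are illustrative, not from the paper) numerically spot-checks this midpoint condition for a mean-zero Gaussian, which is log-concave, and for a well-separated two-component Gaussian mixture, which is not.

```python
import math

def midpoint_logconcave(logf, x, y, lam=0.5, tol=1e-12):
    """Check the log-concavity inequality at one (x, y, lam) triple:
    log f(lam*x + (1-lam)*y) >= lam*log f(x) + (1-lam)*log f(y)."""
    mid = [lam * a + (1 - lam) * b for a, b in zip(x, y)]
    return logf(mid) >= lam * logf(x) + (1 - lam) * logf(y) - tol

def log_gaussian(v):
    """Log-density of the standard mean-zero Gaussian in len(v) dimensions."""
    d = len(v)
    return -0.5 * sum(t * t for t in v) - 0.5 * d * math.log(2 * math.pi)

def log_mixture(v):
    """Log-density of an equal mixture of N(-3, 1) and N(3, 1) in 1-D;
    mean zero and symmetric, but not log-concave."""
    x = v[0]
    return math.log(0.5 * math.exp(-0.5 * (x - 3) ** 2)
                    + 0.5 * math.exp(-0.5 * (x + 3) ** 2)) \
           - 0.5 * math.log(2 * math.pi)

# The Gaussian passes the midpoint test; the mixture fails it at the
# midpoint of its two modes, witnessing non-log-concavity.
print(midpoint_logconcave(log_gaussian, [1.0, 2.0], [-0.5, 0.3]))  # True
print(midpoint_logconcave(log_mixture, [3.0], [-3.0]))             # False
```

A single failing triple certifies non-log-concavity, while passing finitely many triples is only evidence; the Gaussian case can of course be verified exactly since its log-density is concave.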