An Algorithmic Theory of Learning: Robust Concepts and Random Projection

  • Authors:
  • Rosa I. Arriaga; Santosh Vempala


  • Venue:
  • FOCS '99 Proceedings of the 40th Annual Symposium on Foundations of Computer Science
  • Year:
  • 1999

Abstract

We study the phenomenon of cognitive learning from an algorithmic standpoint. How does the brain effectively learn concepts from a small number of examples, despite the fact that each example contains a huge amount of information? We provide a novel analysis for a model of robust concept learning (closely related to "margin classifiers") and show that a relatively small number of examples suffices to learn rich concept classes, including threshold functions, Boolean formulae, and polynomial surfaces. As a result, we obtain simple, intuitive proofs of the generalization bounds for Support Vector Machines. In addition, the new algorithms have several advantages: they are faster, conceptually simpler, and highly resistant to noise. For example, a robust half-space can be PAC-learned in linear time using only a constant number of training examples, regardless of the number of attributes. A general algorithmic consequence of the model, that "more robust concepts are easier to learn", is supported by a multitude of psychological studies.
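The random-projection technique named in the title can be illustrated with a minimal sketch: project high-dimensional points down to a much lower dimension using a random Gaussian matrix, which (by Johnson-Lindenstrauss-style concentration) approximately preserves distances, and hence margins. The function name and parameters below are illustrative, not from the paper.

```python
import random
import math

def random_projection(points, k, seed=0):
    """Map d-dimensional points to k dimensions via a random Gaussian
    matrix scaled by 1/sqrt(k), so expected squared norms are preserved."""
    rng = random.Random(seed)
    d = len(points[0])
    # Each entry drawn i.i.d. from N(0, 1); the 1/sqrt(k) scaling keeps
    # E[||Rx||^2] = ||x||^2.
    R = [[rng.gauss(0.0, 1.0) / math.sqrt(k) for _ in range(d)]
         for _ in range(k)]
    return [[sum(row[i] * p[i] for i in range(d)) for row in R]
            for p in points]

# Two well-separated points in 1000 dimensions.
d = 1000
u = [1.0] * d
v = [-1.0] * d
pu, pv = random_projection([u, v], k=50)
orig = math.dist(u, v)
proj = math.dist(pu, pv)
print(round(proj / orig, 2))  # ratio concentrates near 1
```

With k = 50 the relative distortion of a single pairwise distance has standard deviation of roughly 1/sqrt(2k) ≈ 0.1, which is why a learner run in the projected space can still separate points by nearly the original margin while operating in far fewer dimensions.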