A Microchoice Bound for Continuous-Space Classification Algorithms

  • Authors:
  • Yoram Gat

  • Affiliations:
  • Intel Research, Microprocessor Research Lab, SC12-303, 2200 Mission College Blvd., Santa Clara, CA 95054-1537. yoram.gat@intel.com

  • Venue:
  • Machine Learning
  • Year:
  • 2003

Abstract

Classifiers are often constructed iteratively by introducing changes sequentially to an initial classifier. Langford and Blum (COLT'99: Proceedings of the 12th Annual Conference on Computational Learning Theory, 1999, San Mateo, CA: Morgan Kaufmann, pp. 209–214) take advantage of this structure (the microchoice structure) to obtain bounds on the generalization ability of such algorithms. These bounds can be sharper than more general bounds. This paper extends the applicability of the microchoice approach to the more realistic case where the classifier space is continuous and the sequence of changes is not restricted to a pre-fixed finite set.

Proving the microchoice bound in the continuous case relies on a conditioning technique that is often used in proving VC results. It is shown how this technique can be used to convert any learning algorithm over a continuous space into a family of algorithms over discrete spaces.

The new continuous microchoice result is applied to obtain a bound on the generalization ability of the perceptron algorithm. The greedy nature of the perceptron algorithm, which generates new classifiers by introducing corrections based on misclassified points, is exploited to obtain a generalization bound with an asymptotic form of $O(1/\sqrt{n})$, where $n$ is the training set size.
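
For orientation, here is a minimal sketch of the discrete microchoice bound that the paper extends, in the style of Langford and Blum's COLT'99 result. The notation (d choices, choice sets C_i, sample size n, confidence parameter delta) is our own shorthand, not taken from this page, so treat it as an assumption about the setup rather than a quotation of the paper's statement:

```latex
% Sketch of a discrete microchoice bound (Langford & Blum style).
% Assumed setup: the algorithm reaches hypothesis h through d sequential
% choices, the i-th drawn from a finite set C_i, using n training examples.
% Then, with probability at least 1 - \delta over the sample,
\mathrm{err}(h) \;\le\; \widehat{\mathrm{err}}(h)
  \;+\; \sqrt{\frac{\sum_{i=1}^{d} \ln\lvert C_i\rvert \;+\; \ln(1/\delta)}{2n}}
```

The point of the structure is that the penalty charges only the choices the algorithm actually made, which can be much smaller than a union bound over the whole hypothesis space; the paper's contribution is making this work when the C_i are not pre-fixed finite sets.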
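As a reference point for the algorithm being bounded, the following is a minimal Python sketch of the standard perceptron rule (the function name, stopping rule, and epoch cap are illustrative, not from the paper). It shows the greedy, sequential-correction structure the abstract describes: each update derives the next classifier from the current one via a correction based on a single misclassified point.

```python
import numpy as np

def perceptron(X, y, max_epochs=100):
    """Train a linear separator with the classic perceptron rule.

    X : (n, d) array of training points; y : (n,) array of +/-1 labels.
    Each update is one 'microchoice': the next classifier is obtained
    from the current one by a correction toward a misclassified point.
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(max_epochs):
        updated = False
        for x_i, y_i in zip(X, y):
            if y_i * (np.dot(w, x_i) + b) <= 0:  # misclassified (or on boundary)
                w += y_i * x_i                   # greedy correction toward x_i
                b += y_i
                updated = True
        if not updated:                          # all points correctly classified
            break
    return w, b
```

On linearly separable data this loop makes a finite number of corrections (the classical perceptron convergence theorem); it is this sequence of corrections that a microchoice-style argument charges for when bounding generalization error.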