Minimum Generalization Via Reflection: A Fast Linear Threshold Learner

  • Authors:
  • Steven Hampson; Dennis Kibler

  • Affiliations:
  • Department of Information and Computer Science, University of California at Irvine, CA 92717. hampson@ics.uci.edu; Department of Information and Computer Science, University of California at Irvine, CA 92717. kibler@ics.uci.edu

  • Venue:
  • Machine Learning
  • Year:
  • 1999

Abstract

The number of adjustments required to learn the average LTU function of d features, each of which can take on n equally spaced values, grows as approximately n^2 d when the standard perceptron training algorithm is used on the complete input space of n^d points and perfect classification is required. We demonstrate a simple modification that reduces the observed growth rate in the number of adjustments to approximately d^2 (log(d) + log(n)) with most, but not all, input presentation orders. A similar speed-up is also produced by applying the simple but computationally expensive heuristic "don't overgeneralize" to the standard training algorithm. This performance is very close to the theoretical optimum for learning LTU functions by any method, and is evidence that perceptron-like learning algorithms can learn arbitrary LTU functions in polynomial, rather than exponential, time under normal training conditions. Similar modifications can be applied to the Winnow algorithm, achieving similar performance improvements and demonstrating the generality of the approach.
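
For readers unfamiliar with the baseline the abstract refers to, the sketch below shows a standard perceptron training loop over a finite input space, counting the weight adjustments whose growth rate the paper analyzes. This is a minimal illustration of the unmodified algorithm only, not the authors' reflection-based modification or the "don't overgeneralize" heuristic (those are described in the paper itself); the function name and structure are our own.

```python
import numpy as np

def perceptron_train(X, y, max_epochs=1000):
    """Standard perceptron training on a fixed, finite input space.

    X: (m, d) array of feature vectors; y: (m,) array of labels in {-1, +1}.
    Returns (weights, bias, adjustments), where `adjustments` counts the
    weight updates -- the quantity whose growth rate the abstract discusses.
    """
    m, d = X.shape
    w = np.zeros(d)
    b = 0.0
    adjustments = 0
    for _ in range(max_epochs):
        mistakes = 0
        for i in range(m):
            # LTU prediction: sign of the linear threshold unit's activation.
            pred = 1 if X[i] @ w + b > 0 else -1
            if pred != y[i]:
                # Standard perceptron adjustment toward the misclassified example.
                w += y[i] * X[i]
                b += y[i]
                adjustments += 1
                mistakes += 1
        if mistakes == 0:  # perfect classification of the whole input space
            break
    return w, b, adjustments
```

Under the training regime the abstract describes (all n^d points presented until no mistakes remain), the `adjustments` counter is the quantity that grows as roughly n^2 d for the standard rule and as roughly d^2 (log(d) + log(n)) with the paper's modification.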