Sequential inductive learning

  • Authors:
  • Jonathan Gratch

  • Affiliations:
  • University of Southern California, Information Sciences Institute, Marina del Rey, CA

  • Venue:
  • AAAI'96: Proceedings of the Thirteenth National Conference on Artificial Intelligence - Volume 1
  • Year:
  • 1996

Abstract

This article advocates a new model for inductive learning. Called sequential induction, it helps bridge classical fixed-sample learning techniques (which are efficient but difficult to formally characterize) and worst-case approaches (which provide strong statistical guarantees but are too inefficient for practical use). Learning proceeds as a sequence of decisions that are informed by training data. By analyzing induction at the level of these decisions, and by using only enough data to make each decision, sequential induction provides statistical guarantees, but with substantially less data than worst-case methods require. The sequential inductive model is also useful as a method for determining a sufficient sample size for inductive learning and, as such, is relevant to learning problems where the sheer volume of data or the cost of gathering data precludes the use of traditional methods.
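
As a rough illustration of the decision-level idea described in the abstract (not the paper's actual procedure), the sketch below makes a single inductive decision, choosing between two candidate attributes, by drawing training examples only until a Hoeffding-style confidence bound certifies the choice. All names (hoeffding_radius, sequential_choice, generate_example) are hypothetical, and the bound and batching scheme are assumptions rather than the paper's method.

```python
import math
import random

def hoeffding_radius(n, delta, value_range=1.0):
    # With probability at least 1 - delta, the true mean of a quantity bounded
    # in an interval of width value_range lies within this radius of the
    # empirical mean of n samples (Hoeffding's inequality).
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

def sequential_choice(scorers, sample_example, delta=0.05, batch=50, max_n=100_000):
    # Draw training examples only until the leading candidate's estimated
    # quality is separated from the runner-up by the combined confidence
    # radii, then commit to that single decision.
    sums = {name: 0.0 for name in scorers}
    n = 0
    while n < max_n:
        for _ in range(batch):
            example = sample_example()
            for name, score in scorers.items():
                sums[name] += score(example)
        n += batch
        means = {name: total / n for name, total in sums.items()}
        ranked = sorted(means, key=means.get, reverse=True)
        best, runner_up = ranked[0], ranked[1]
        if means[best] - means[runner_up] > 2 * hoeffding_radius(n, delta):
            return best, n                    # decision is statistically safe
    return max(means, key=means.get), n       # data budget exhausted; pick the leader

# Hypothetical usage: decide which of two boolean attributes better predicts
# a noisy label; attribute "a" agrees with the label far more often than "b".
random.seed(0)
def generate_example():
    a = random.random() < 0.5
    b = random.random() < 0.5
    label = a if random.random() < 0.9 else b
    return {"a": a, "b": b, "label": label}

winner, examples_used = sequential_choice(
    {"a": lambda x: float(x["a"] == x["label"]),
     "b": lambda x: float(x["b"] == x["label"])},
    generate_example)
print(winner, "chosen after", examples_used, "examples")
```

The point of the sketch is the stopping rule: the sample size is not fixed in advance but grows only until this one decision can be made with the requested confidence, which mirrors the abstract's claim of obtaining guarantees with far less data than a worst-case, fixed-sample analysis would demand.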