Improving Generalization with Active Learning

  • Authors:
  • David Cohn; Les Atlas; Richard Ladner

  • Affiliations:
  • Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139. COHN@PSYCHE.MIT.EDU; Department of Electrical Engineering, University of Washington, Seattle, WA 98195; Department of Computer Science and Engineering, University of Washington, Seattle, WA 98195

  • Venue:
  • Machine Learning - Special issue on structured connectionist systems
  • Year:
  • 1994

Abstract

Active learning differs from “learning from examples” in that the learning algorithm assumes at least some control over what part of the input domain it receives information about. In some situations, active learning is provably more powerful than learning from examples alone, giving better generalization for a fixed number of training examples.

In this article, we consider the problem of learning a binary concept in the absence of noise. We describe a formalism for active concept learning called selective sampling and show how it may be approximately implemented by a neural network. In selective sampling, a learner receives distribution information from the environment and queries an oracle on parts of the domain it considers “useful.” We test our implementation, called an SG-network, on three domains and observe significant improvement in generalization.
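The selective-sampling loop can be made concrete with a toy case. The sketch below is illustrative only, not the paper's SG-network: the one-dimensional threshold concept, the pool size, and the query budget are all assumptions. It tracks the interval of thresholds consistent with the labels seen so far; that interval is where the most-specific and most-general consistent hypotheses disagree (the region of uncertainty), and only pool points falling inside it are sent to the oracle.

```python
# A minimal sketch of selective sampling on a 1-D threshold concept.
# Assumptions (not from the paper): the concept class, pool size 500,
# and a budget of 15 queries are chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)
true_t = 0.62                          # hidden binary concept: label(x) = 1 iff x >= true_t
oracle = lambda x: int(x >= true_t)    # noise-free membership oracle

pool = rng.uniform(0.0, 1.0, 500)      # unlabeled pool: the distribution information
neg_max, pos_min = 0.0, 1.0            # thresholds consistent with the labels so far
queries = 0

while queries < 15:
    # Region of uncertainty: thresholds in (neg_max, pos_min) all fit the
    # data, so only pool points inside this interval are "useful" to query.
    candidates = pool[(pool > neg_max) & (pool < pos_min)]
    if candidates.size == 0:
        break
    x = float(rng.choice(candidates))  # selective sampling: ask the oracle here
    if oracle(x):
        pos_min = min(pos_min, x)      # positive label lowers the upper bound
    else:
        neg_max = max(neg_max, x)      # negative label raises the lower bound
    queries += 1

print(f"{queries} queries -> region of uncertainty ({neg_max:.3f}, {pos_min:.3f})")
```

Under these assumptions, each query lands inside the uncertainty interval by construction, so the interval shrinks geometrically with the number of labels; examples drawn blindly from the pool would mostly fall outside the informative region and be wasted, which is the intuition behind the generalization gains the abstract describes.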