Perceptron-based learning algorithms

  • Author: S. I. Gallant
  • Affiliation: Coll. of Comput. Sci., Northeastern Univ., Boston, MA
  • Venue: IEEE Transactions on Neural Networks
  • Year: 1990

Abstract

A key task for connectionist research is the development and analysis of learning algorithms. This paper examines several supervised learning algorithms for single-cell and network models. The heart of these algorithms is the pocket algorithm, a modification of perceptron learning that makes perceptron learning well-behaved with nonseparable training data, even if the data are noisy and contradictory. Features of these algorithms include speed, i.e. the algorithms are fast enough to handle large sets of training data; network scaling, i.e. network methods scale up almost as well as single-cell models as the number of inputs increases; analytic tractability, i.e. upper bounds on classification error are derivable; online learning, i.e. some variants can learn continually without referring to previous data; and winner-take-all (choice) groups, i.e. the algorithms can be adapted to select one out of a number of possible classifications. These learning algorithms are suitable for applications in machine learning, pattern recognition, and connectionist expert systems.
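To make the central idea concrete, here is a minimal sketch of a pocket-style learner: ordinary perceptron updates are applied on mistakes, but a separate "pocket" copy retains the weight vector with the best training-set accuracy seen so far, so the result remains sensible even when the data are not linearly separable. This is an illustrative reconstruction, not the paper's exact procedure; the function name, the update schedule (random single examples), and the labels in {-1, +1} are assumptions for the example.

```python
import random

def pocket_algorithm(data, epochs=50, seed=0):
    """Sketch of pocket learning: data is a list of (x, y) pairs with
    x a tuple of features and y in {-1, +1}.  Run perceptron updates,
    keeping a 'pocket' copy of the best weights seen on the training set."""
    rng = random.Random(seed)
    d = len(data[0][0])
    w = [0.0] * (d + 1)              # weights; w[0] is the bias term

    def predict(weights, x):
        s = weights[0] + sum(wi * xi for wi, xi in zip(weights[1:], x))
        return 1 if s > 0 else -1

    def n_correct(weights):
        return sum(predict(weights, x) == y for x, y in data)

    pocket, best = list(w), n_correct(w)
    for _ in range(epochs):
        x, y = rng.choice(data)
        if predict(w, x) != y:       # standard perceptron update on a mistake
            w = [w[0] + y] + [wi + y * xi for wi, xi in zip(w[1:], x)]
            c = n_correct(w)
            if c > best:             # pocket step: keep the best weights so far
                pocket, best = list(w), c
    return pocket
```

On separable data this behaves like plain perceptron learning; on noisy or contradictory data the pocket weights stabilize at a low-error separator instead of cycling, which is the well-behavedness property the abstract refers to.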