Supervised Learning Without Output Labels

  • Authors:
  • Ramesh R. Sarukkai

  • Affiliations:
  • -

  • Venue:
  • -
  • Year:
  • 1994

Abstract

Supervised neural network learning algorithms have proven very successful at solving a variety of learning problems. However, they share a common limitation: they require explicit output labels. This requirement makes such algorithms implausible as biological models. In this paper, it is shown that pattern classification can be achieved in a multi-layered, feed-forward neural network, without requiring explicit output labels, through a process of supervised self-organization. The class projection is achieved by optimizing appropriate within-class uniformity and between-class discernibility criteria. The mapping function and the class labels are developed together iteratively using the derived self-organizing back-propagation algorithm. The ability of the self-organizing network to generalize to unseen data is also evaluated experimentally on real data sets, and compares favorably with traditional labeled supervision of neural networks. Moreover, interesting features emerge from the proposed self-organizing supervision that are absent in conventional approaches. Further implications of self-organizing supervision with neural networks are also discussed.
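
To make the idea concrete, below is a minimal, hypothetical NumPy sketch of supervised self-organization in the spirit the abstract describes: class membership is known, but no explicit output label vectors are given, so per-class centroids of the network's outputs serve as emergent labels and are re-estimated each epoch. The one-hidden-layer sigmoid network, the squared-error surrogate for within-class uniformity, and the simple centroid-spreading step standing in for between-class discernibility are all assumptions for illustration; this is not the paper's derived self-organizing back-propagation algorithm.

```python
# Illustrative sketch only: emergent-label training with known class membership
# but no predefined target vectors. All architectural and loss details are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: two Gaussian blobs in 2-D; membership is known, target vectors are not.
n_per_class, n_classes, out_dim, hid_dim = 50, 2, 2, 8
X = np.vstack([rng.normal(loc=c * 3.0, scale=0.7, size=(n_per_class, 2))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

W1 = rng.normal(scale=0.5, size=(2, hid_dim)); b1 = np.zeros(hid_dim)
W2 = rng.normal(scale=0.5, size=(hid_dim, out_dim)); b2 = np.zeros(out_dim)
lr = 0.5

for epoch in range(200):
    # Forward pass through the feed-forward network.
    H = sigmoid(X @ W1 + b1)
    O = sigmoid(H @ W2 + b2)

    # Emergent labels: per-class centroids of the current outputs.
    centroids = np.vstack([O[y == c].mean(axis=0) for c in range(n_classes)])

    # Crude between-class discernibility step (an assumption, not the paper's criterion):
    # spread the centroids away from their common mean before using them as targets.
    mean_c = centroids.mean(axis=0)
    targets_per_class = np.clip(mean_c + 1.5 * (centroids - mean_c), 0.05, 0.95)
    T = targets_per_class[y]

    # Within-class uniformity surrogate: squared error between each output and its
    # class target, minimized by ordinary back-propagation.
    dO = (O - T) * O * (1 - O)
    dW2 = H.T @ dO / len(X);  db2 = dO.mean(axis=0)
    dH = (dO @ W2.T) * H * (1 - H)
    dW1 = X.T @ dH / len(X);  db1 = dH.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2; W1 -= lr * dW1; b1 -= lr * db1

# Classify by nearest emergent centroid and report agreement with class membership.
O = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
pred = np.argmin(((O[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
print("nearest-centroid agreement with class membership:", (pred == y).mean())
```

Re-estimating the centroids at every epoch is meant to mirror the abstract's statement that the mapping function and the class labels are developed together iteratively; the specific update schedule and spreading factor here are illustrative choices, not values taken from the paper.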