A short proof of the posterior probability property of classifier neural networks

  • Authors:
  • Raúl Rojas

  • Affiliations:
  • Institut für Informatik, Freie Universität Berlin, Takustr. 9, 14195 Berlin, Germany

  • Venue:
  • Neural Computation
  • Year:
  • 1996


Abstract

It is now well known that neural classifiers can learn to compute a posteriori probabilities of classes in input space. This note offers a shorter proof than the traditional ones. Only one class has to be considered, and straightforward minimization of the error function yields the main result. The method extends to any differentiable error function. We also present a simple visual proof of the same theorem, which stresses that the network must be perfectly trained and have enough plasticity.
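The core idea behind the result can be illustrated numerically (this sketch is not from the paper; the function name and the posterior value 0.7 are hypothetical). For a fixed input x whose true class-1 posterior is p, the expected squared error of a network output y is p(y - 1)^2 + (1 - p)y^2, a quadratic in y whose minimizer is y = p. So a network flexible enough to minimize the error pointwise must output the posterior probability:

```python
import numpy as np

def expected_squared_error(y, p):
    """Expected squared error of output y when the binary target t
    equals 1 with probability p: p*(y-1)^2 + (1-p)*y^2."""
    return p * (y - 1.0) ** 2 + (1.0 - p) * y ** 2

p = 0.7                                   # assumed posterior P(C=1 | x) for some input x
ys = np.linspace(0.0, 1.0, 10001)          # candidate network outputs
best_y = ys[np.argmin(expected_squared_error(ys, p))]
print(best_y)                              # minimizer coincides with the posterior p
```

Running the grid search recovers y ≈ 0.7, matching the assumed posterior; the same pointwise argument, carried out analytically, is what a proof along these lines formalizes.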