Learning by asymmetric parallel Boltzmann machines

  • Authors:
  • Bruno Apolloni; Diego de Falco

  • Affiliations:
  • Dipartimento di Scienze dell'Informazione, Università di Milano, I-20133 Milano, Italy; Dipartimento di Matematica, Politecnico di Milano, I-20133 Milano, Italy

  • Venue:
  • Neural Computation
  • Year:
  • 1991

Abstract

We consider the Little, Shaw, Vasudevan model as a parallel asymmetric Boltzmann machine, in the sense that we extend to this model the entropic learning rule first studied by Ackley, Hinton, and Sejnowski in the case of a sequentially activated network with a symmetric synaptic matrix. The resulting Hebbian learning rule for the parallel asymmetric model draws the signal for updating the synaptic weights from time averages of the discrepancy between expected and actual transitions along the past history of the network. Because we do not assume symmetry of the weights, our analysis also covers feedforward networks, for which the entropic learning rule turns out to be complementary to the error backpropagation rule, in that it rewards correct behavior instead of penalizing wrong answers.
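
The update signal described in the abstract can be read, informally, as "reward the transitions that actually occurred relative to those the current weights expected." Below is a minimal NumPy sketch of that reading, assuming logistic units updated synchronously (all at once, as in the Little model) and using the time average of (actual − expected) transitions over the past history as the weight-update signal. The network size, learning rate, and run length are illustrative assumptions; the paper's precise entropic rule, derived from the relative entropy between clamped and free transition statistics, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def parallel_step(state, W, b):
    """One synchronous (parallel) update: every unit fires independently
    with a logistic probability of its net input."""
    p_on = sigmoid(W @ state + b)                     # expected firing probabilities
    new_state = (rng.random(len(state)) < p_on).astype(float)
    return new_state, p_on

def hebbian_update(W, b, history, lr=0.01):
    """Time-average, over the past history, the discrepancy between the
    transitions that actually occurred and the transitions expected under
    the current weights, and use that average as the update signal."""
    dW = np.zeros_like(W)
    db = np.zeros_like(b)
    for state, new_state, p_on in history:
        dW += np.outer(new_state - p_on, state)       # reward actual vs. expected
        db += new_state - p_on
    n = len(history)
    return W + lr * dW / n, b + lr * db / n

# Toy run: 5 units, asymmetric weights (no symmetry constraint imposed).
n_units = 5
W = rng.normal(scale=0.1, size=(n_units, n_units))
b = np.zeros(n_units)
state = (rng.random(n_units) < 0.5).astype(float)

history = []
for _ in range(100):
    new_state, p_on = parallel_step(state, W, b)
    history.append((state, new_state, p_on))
    state = new_state

W, b = hebbian_update(W, b, history)
```

Since no symmetry is imposed on W, the same sketch covers feedforward connectivity: zeroing the feedback entries of W leaves the update rule unchanged, consistent with the abstract's remark that the rule applies to feedforward networks as well.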