Error Weighting in Artificial Neural Networks Learning Interpreted as a Metaplasticity Model

  • Authors:
  • Diego Andina; Aleksandar Jevtić; Alexis Marcano; J. M. Barrón Adame

  • Affiliations:
  • Universidad Politécnica de Madrid, Spain; Universidad Politécnica de Madrid, Spain; Universidad Politécnica de Madrid, Spain; Universidad de Guanajuato, Mexico

  • Venue:
  • IWINAC '07: Proceedings of the 2nd International Work-Conference on the Interplay Between Natural and Artificial Computation, Part I: Bio-inspired Modeling of Cognitive Tasks
  • Year:
  • 2007

Abstract

Many Artificial Neural Network design algorithms or learning methods involve the minimization of an error objective function. During learning, weight values are updated following a strategy that tends to minimize the final mean error in the network's performance. Weight values are classically seen as a representation of the synaptic strengths in biological neurons, and their ability to change can be interpreted as artificial plasticity inspired by this biological property of neurons. Accordingly, metaplasticity is interpreted in this paper as the ability to change the efficiency of artificial plasticity, giving more relevance to weight updates associated with less frequent activations and less relevance to those associated with frequent ones. Modeling this interpretation in the training phase, the hypothesis of improved training is tested for the Multilayer Perceptron trained with Backpropagation. The results show much more efficient training while maintaining the Artificial Neural Network's performance.
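The abstract does not state the paper's exact weighting function, but the core idea can be illustrated with a small sketch. The Python example below trains a 2-2-1 Multilayer Perceptron with standard Backpropagation on a toy problem and modulates each hidden unit's weight update by the inverse of a running estimate of how often that unit activates, so rarely active units receive larger effective updates. The names `freq`, `meta`, and `decay`, the activation threshold, and the specific inverse-frequency rule are illustrative assumptions, not the formulation from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class (XOR-like) problem for a 2-2-1 Multilayer Perceptron.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Network parameters.
W1 = rng.normal(scale=0.5, size=(2, 2)); b1 = np.zeros(2)
W2 = rng.normal(scale=0.5, size=(2, 1)); b2 = np.zeros(1)

eta = 0.5                # base learning rate (plasticity)
freq = np.full(2, 0.5)   # running estimate of each hidden unit's activation frequency
decay = 0.99             # smoothing factor for the frequency estimate (assumed)
eps = 1e-3               # avoids division by zero for rarely active units

for epoch in range(10000):
    for x, t in zip(X, y):
        # Forward pass.
        h = sigmoid(x @ W1 + b1)
        o = sigmoid(h @ W2 + b2)

        # Update the activation-frequency estimate (a hypothetical proxy
        # for how often each hidden unit fires strongly).
        freq = decay * freq + (1.0 - decay) * (h > 0.5)

        # Metaplasticity-style scaling: rarely active units get larger
        # effective learning rates; frequently active ones get smaller.
        meta = 1.0 / (freq + eps)
        meta /= meta.mean()  # keep the average step size near eta

        # Standard Backpropagation deltas for sigmoid units.
        delta_o = (o - t) * o * (1.0 - o)
        delta_h = (delta_o @ W2.T) * h * (1.0 - h)

        # Weight updates, with hidden-layer plasticity modulated per unit.
        W2 -= eta * np.outer(h, delta_o)
        b2 -= eta * delta_o
        W1 -= eta * np.outer(x, delta_h * meta)
        b1 -= eta * delta_h * meta

print("Predictions:", sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel())
```

Normalizing the scaling factors by their mean keeps the average effective learning rate near `eta`, so the metaplasticity term redistributes plasticity across units rather than changing the overall step size.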