Meta-adaptation: neurons that change their mode

  • Authors:
  • Dominic Palmer-Brown, Miao Kang, Sin Wee Lee

  • Affiliations:
  • School of Computing and Technology, University of East London, UK (all authors)

  • Venue:
  • NN'08 Proceedings of the 9th WSEAS International Conference on Neural Networks
  • Year:
  • 2008

Abstract

This paper explores the integration of learning modes into a single neural network structure in which layers of neurons, and even individual neurons, adopt different modes. There are several reasons to pursue modal learning in neural networks. One motivation is to overcome the inherent limitations of any single mode: some modes memorise specific features while others average across features, and either approach may be appropriate depending on the circumstances. A second is inspiration from neuroscience, cognitive science and human learning, where no serious model can be built without considering multiple modes. A third is non-stationary input data, or time-variant learning objectives, where the required mode is a function of time. Several modal learning ideas are presented: the Snap-Drift Neural Network (SDNN), which toggles its learning between two modes, either unsupervised or guided by performance feedback (reinforcement); a general approach to swapping between several learning modes in real time; and the Adaptive Function Neural Network (ADFUNN), in which adaptation applies simultaneously to both the weights and the individual neuron activation functions.
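
To make the snap/drift distinction concrete, the sketch below shows a single neuron that toggles between a "snap" update (memorising common features via elementwise intersection of weights and input) and a "drift" update (LVQ-style averaging towards the input). This is a minimal illustrative sketch under those assumptions; the class name, parameters, and the simple toggle rule are hypothetical and not the authors' exact SDNN specification, in which mode switching is driven by performance feedback.

```python
import numpy as np


class SnapDriftNeuron:
    """Illustrative single snap-drift unit (not the authors' exact SDNN).

    - 'snap'  : memorise shared features by intersecting (elementwise min)
                the weight vector with the input pattern.
    - 'drift' : average across features by moving the weights a small step
                towards the input (LVQ-style update).
    The mode is toggled externally, e.g. by performance feedback.
    """

    def __init__(self, n_inputs, drift_rate=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.uniform(0.5, 1.0, size=n_inputs)  # start permissive
        self.drift_rate = drift_rate
        self.mode = "drift"

    def toggle(self, success: bool):
        # Hypothetical toggle rule: snap after success (lock in features),
        # drift otherwise (generalise). The paper uses performance feedback
        # rather than this per-step heuristic.
        self.mode = "snap" if success else "drift"

    def activate(self, x):
        # Simple match score between the input and the current weights.
        return float(np.dot(self.w, x) / (np.linalg.norm(self.w) + 1e-9))

    def learn(self, x):
        if self.mode == "snap":
            self.w = np.minimum(self.w, x)            # pattern intersection
        else:
            self.w += self.drift_rate * (x - self.w)  # move towards input


# Usage: alternate modes on a small stream of binary patterns.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    neuron = SnapDriftNeuron(n_inputs=8)
    for t in range(20):
        x = rng.integers(0, 2, size=8).astype(float)
        neuron.learn(x)
        neuron.toggle(success=(t % 2 == 0))  # stand-in for real feedback
    print("final weights:", np.round(neuron.w, 2))
```

The same toggling pattern extends naturally to the other ideas in the paper: swapping among more than two update rules in real time, or (as in ADFUNN) adapting the activation function alongside the weights.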