Modal learning neural networks

  • Authors:
  • Dominic Palmer-Brown; Sin Wee Lee; Chrisina Draganova; Miao Kang

  • Affiliations:
  • School of Computing and Technology, University of East London, London, United Kingdom (all authors)

  • Venue:
  • WSEAS Transactions on Computers
  • Year:
  • 2009

Abstract

This paper explores the integration of learning modes into a single neural network structure, in which layers of neurons or individual neurons adopt different modes. There are several reasons to explore modal learning. One motivation is to overcome the inherent limitations of any single mode (for example, some modes memorise specific features while others average across features, and each approach may be appropriate depending on the circumstances); another is inspiration from neuroscience, cognitive science and human learning, where no serious model can be built without considering multiple modes; a third is non-stationary input data, or time-variant learning objectives, where the required mode is a function of time. Two modal learning methods are presented. The Snap-Drift Neural Network (SDNN), which toggles its learning between two modes, is incorporated into an online system that provides carefully targeted guidance and feedback to students. The adaptive function neural network (ADFUNN) adapts the weights and the individual neuron activation functions simultaneously. The combination of the two modal learning methods, the Snap-Drift ADaptive FUnction Neural Network (SADFUNN), is then applied to optical and pen-based recognition of handwritten digits, with results that demonstrate the effectiveness of the approach.
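
To make the two learning modes concrete, the sketch below illustrates them in a hypothetical form: a snap step that memorises features shared between the input and the weights via an element-wise minimum, a drift step that averages the weights towards the input, and a neuron whose sampled activation function is adapted alongside its weights in the spirit of ADFUNN. The function names, the mode-toggling schedule and the learning rates are illustrative assumptions for exposition, not the published SDNN, ADFUNN or SADFUNN equations.

```python
import numpy as np

def snap_drift_update(w, x, mode, beta=0.2):
    """Update one weight vector w for input x under the given learning mode.

    Assumption: "snap" memorises features common to input and weights
    (element-wise minimum); "drift" converges towards the input (LVQ-style).
    """
    if mode == "snap":
        return np.minimum(w, x)          # keep only shared features
    return w + beta * (x - w)            # move a fraction beta towards the input

def train_snap_drift(X, n_nodes=4, epochs=10, seed=0):
    """Toggle between snap and drift on alternate epochs (one plausible schedule)."""
    rng = np.random.default_rng(seed)
    W = rng.random((n_nodes, X.shape[1]))
    for epoch in range(epochs):
        mode = "snap" if epoch % 2 == 0 else "drift"
        for x in X:
            winner = int(np.argmax(W @ x))                 # winner-takes-all match
            W[winner] = snap_drift_update(W[winner], x, mode)
    return W

class AdaptiveFunctionNeuron:
    """Neuron with a piecewise-linear activation function whose sampled values
    are adapted from the error signal along with the weights (ADFUNN-like idea,
    illustrative parameters only)."""

    def __init__(self, n_in, n_points=21, lr_w=0.01, lr_f=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=0.1, size=n_in)
        self.grid = np.linspace(-1.0, 1.0, n_points)       # sample points of f
        self.f = np.tanh(self.grid)                        # initial activation shape
        self.lr_w, self.lr_f = lr_w, lr_f

    def forward(self, x):
        a = float(np.clip(self.w @ x, -1.0, 1.0))          # bounded net input
        return a, float(np.interp(a, self.grid, self.f))   # interpolated activation

    def update(self, x, target):
        a, y = self.forward(x)
        err = target - y
        k = int(np.argmin(np.abs(self.grid - a)))          # nearest function sample
        self.f[k] += self.lr_f * err                       # adapt the function value
        self.w += self.lr_w * err * x                      # adapt the weights
```

On this reading, toggling between snap and drift lets the network alternately capture features common to a group of inputs and smooth over their variation, while the adaptive activation function gives each neuron a degree of freedom beyond its weights; these are the two complementary behaviours the abstract attributes to SDNN and ADFUNN respectively.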