Dually Optimal Neuronal Layers: Lobe Component Analysis

  • Authors:
  • Juyang Weng; M. Luciw

  • Affiliations:
  • Dept. of Comput. Sci. & Eng., Michigan State Univ., East Lansing, MI

  • Venue:
  • IEEE Transactions on Autonomous Mental Development
  • Year:
  • 2009

Abstract

Development imposes great challenges. Internal "cortical" representations must be autonomously generated from interactive experiences. The eventual quality of these developed representations is of course important. Additionally, learning must be as fast as possible, so that better representations are quickly derived from limited experiences. Those who achieve both of these will have competitive advantages. We present a cortex-inspired theory called lobe component analysis (LCA) guided by the aforementioned dual criteria. A lobe component represents a high concentration of probability density of the neuronal input space. We explain, through mathematical analysis, how lobe components can achieve a dual spatiotemporal ("best" and "fastest") optimality, and how lobe component plasticity can be temporally scheduled to take into account the history of observations in the best possible way. This contrasts with gradient-based adaptive learning algorithms, which use only the last observation. Because they are based on two cell-centered mechanisms, Hebbian learning and lateral inhibition, lobe components develop in place, meaning every networked neuron is individually responsible for learning its signal-processing characteristics within its connected network environment; there is no need for a separate learning network. We argue that in-place learning algorithms will be crucial for real-world, large-scale developmental applications because of their simplicity, low computational complexity, and generality. Our experimental results show that, thanks to its dual optimality, the LCA algorithm learns drastically faster than other Hebbian-based updating methods and independent component analysis algorithms, without using any second- or higher-order statistics. We also introduce the new principle of fast learning from stable representation.
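To make the mechanism described in the abstract concrete, the following is a minimal sketch (not the authors' published code) of one LCA-style update step in Python/NumPy: lateral inhibition is approximated by winner-take-all, and the winning neuron follows a Hebbian update whose retention and learning rates come from an age-dependent (amnesic) mean rather than a hand-tuned global step size. The function name lca_update, the parameter names, and the amnesic constants are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def lca_update(x, W, ages, t1=20.0, t2=200.0, c=2.0, r=2000.0):
    """One in-place LCA-style step (illustrative sketch).

    x    : input vector (one observation)
    W    : matrix whose rows are lobe component (neuron) weight vectors
    ages : per-neuron firing counts, updated in place
    t1, t2, c, r : amnesic-function parameters (assumed defaults)
    """
    # Responses of all neurons to the input (pre-inhibition activity).
    responses = W @ x / (np.linalg.norm(W, axis=1) + 1e-12)

    # Lateral inhibition modeled as winner-take-all: only the most
    # responsive neuron fires and updates its weights.
    j = int(np.argmax(responses))
    ages[j] += 1
    n = ages[j]

    # Amnesic parameter mu(n): zero for young neurons (plain averaging),
    # growing with age so that recent observations keep some weight.
    if n < t1:
        mu = 0.0
    elif n < t2:
        mu = c * (n - t1) / (t2 - t1)
    else:
        mu = c + (n - t2) / r

    # Temporally scheduled plasticity: retention and learning rates are
    # derived from the neuron's own age, summing to one.
    w_retain = (n - 1.0 - mu) / n
    w_learn = (1.0 + mu) / n

    # Hebbian update: presynaptic input weighted by postsynaptic response.
    W[j] = w_retain * W[j] + w_learn * responses[j] * x
    return j
```

In a full developmental system, a step like this would run for every layer as observations stream in; a soft form of lateral inhibition could keep the top-k responders active instead of a single winner, but the in-place character stays the same: each neuron updates only from its own input, response, and age.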