Simple Modification of Oja Rule Limits L1-Norm of Weight Vector and Leads to Sparse Connectivity

  • Authors:
  • Vladimir Aparin

  • Affiliations:
  • -

  • Venue:
  • Neural Computation
  • Year:
  • 2012

Abstract

This letter describes a simple modification of the Oja learning rule that asymptotically constrains the L1-norm of an input weight vector instead of the L2-norm constrained by the original rule. The constraint is enforced locally, in contrast to commonly used instant normalizations, which require knowledge of all of a neuron's input weights in order to update each one individually. The proposed rule converges to a weight vector that is sparser (has more zero weights) than the vector learned by the original Oja rule with or without the zero bound, which could explain developmental synaptic pruning.
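Only the abstract is reproduced here, so the exact update equation is not shown. The sketch below is a minimal toy comparison, assuming one plausible form of such a modification: the decay term y²w of the original Oja rule is replaced by y²sign(w). For that form, a fixed point satisfies Cw = (wᵀCw)·sign(w), and taking the dot product with w gives ||w||₁ = 1, consistent with an asymptotic L1-norm constraint. The function names, toy data, and thresholds are illustrative, not taken from the paper.

```python
import numpy as np

def oja_step(w, x, lr):
    # Original Oja rule: dw = lr * y * (x - y * w).
    # The decay term y^2 * w drives ||w||_2 toward 1.
    y = w @ x
    return w + lr * y * (x - y * w)

def oja_l1_step(w, x, lr):
    # Assumed L1-constraining variant for this sketch: the decay term
    # uses sign(w) instead of w.  At a fixed point C w = (w^T C w) sign(w),
    # so dotting with w gives ||w||_1 = 1; small weights tend toward zero.
    y = w @ x
    return w + lr * y * (x - y * np.sign(w))

rng = np.random.default_rng(0)
dim, latent = 20, 5
A = rng.normal(size=(dim, latent)) / np.sqrt(latent)  # toy mixing matrix

w_l2 = rng.normal(size=dim) * 0.1
w_l1 = w_l2.copy()
lr = 0.005
for _ in range(50_000):
    x = A @ rng.normal(size=latent)   # correlated toy input
    w_l2 = oja_step(w_l2, x, lr)
    w_l1 = oja_l1_step(w_l1, x, lr)

# Exact counts depend on the random mixing; the L1 variant typically
# ends up with more near-zero weights than the original rule.
print("Oja (L2):  ||w||_2 =", round(np.linalg.norm(w_l2), 3),
      "| weights below 0.05:", int(np.sum(np.abs(w_l2) < 0.05)))
print("L1 variant: ||w||_1 =", round(np.linalg.norm(w_l1, 1), 3),
      "| weights below 0.05:", int(np.sum(np.abs(w_l1) < 0.05)))
```

Both updates use only the local quantities x, y, and the weight itself, which is the locality property the abstract contrasts with instant (whole-vector) normalization.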