The role of constraints in Hebbian learning

  • Authors:
  • Kenneth D. Miller; David J. C. MacKay

  • Venue:
  • Neural Computation
  • Year:
  • 1994

Abstract

Models of unsupervised, correlation-based (Hebbian) synaptic plasticity are typically unstable: either all synapses grow until each reaches the maximum allowed strength, or all synapses decay to zero strength. A common method of avoiding these outcomes is to use a constraint that conserves or limits the total synaptic strength over a cell. We study the dynamic effects of such constraints. Two methods of enforcing a constraint are distinguished, multiplicative and subtractive. For otherwise linear learning rules, multiplicative enforcement of a constraint results in dynamics that converge to the principal eigenvector of the operator determining unconstrained synaptic development. Subtractive enforcement, in contrast, typically leads to a final state in which almost all synaptic strengths reach either the maximum or minimum allowed value. This final state is often dominated by weight configurations other than the principal eigenvector of the unconstrained operator. Multiplicative enforcement yields a "graded" receptive field in which most mutually correlated inputs are represented, whereas subtractive enforcement yields a receptive field that is "sharpened" to a subset of maximally correlated inputs. If two equivalent input populations (e.g., two eyes) innervate a common target, multiplicative enforcement prevents their segregation (ocular dominance segregation) when the two populations are weakly correlated, whereas subtractive enforcement allows segregation under these circumstances. These results may be used to understand constraints both over output cells and over input cells. A variety of rules that can implement constrained dynamics are discussed.
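
The multiplicative/subtractive distinction is easy to illustrate numerically. The sketch below is not the paper's actual simulations, only a minimal assumed setup: a linear Hebbian rule dw/dt = Cw over eight inputs whose correlation matrix C contains a more strongly correlated subset, with the total strength sum(w) held fixed either by rescaling all weights (multiplicative) or by subtracting the same amount from each (subtractive). The matrix values, learning rate eta, bound w_max, and step count are all illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
n = 8

# Illustrative correlation matrix: all inputs weakly correlated (0.2),
# with a more strongly correlated subset at indices 0-3 (0.5).
C = np.full((n, n), 0.2)
C[:4, :4] = 0.5
np.fill_diagonal(C, 1.0)

eta, w_max, steps = 0.01, 1.0, 5000  # assumed parameters

def evolve(mode):
    """Linear Hebbian growth with the total strength sum(w) constrained."""
    w = rng.uniform(0.4, 0.6, n)
    total = w.sum()                    # conserved total synaptic strength
    for _ in range(steps):
        dw = eta * (C @ w)             # unconstrained Hebbian growth
        if mode == "multiplicative":
            w = w + dw
            w *= total / w.sum()       # rescale: decay proportional to w
        else:                          # subtractive
            w = w + dw - dw.mean()     # subtract the same amount everywhere
        w = np.clip(w, 0.0, w_max)     # hard bounds on each synapse
    return w

print("multiplicative:", np.round(evolve("multiplicative"), 2))
print("subtractive:   ", np.round(evolve("subtractive"), 2))

Run as written, this should reproduce the qualitative picture in the abstract: the multiplicative weights settle into a graded pattern close to the principal eigenvector of C, with every correlated input represented, while the subtractive weights saturate, the strongly correlated subset pinned at w_max and the remaining synapses driven to zero.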