Neural statics and dynamics

  • Author: Robert L. Fry
  • Affiliation: Applied Physics Laboratory, The Johns Hopkins University, 11100 Johns Hopkins Road, Laurel, MD 20723-6099, USA
  • Venue: Neurocomputing
  • Year: 2005

Abstract

A formal theory of systems was previously proposed as a quantitative basis for neural computation. This theory dictated the architectural aspects of a pyramidal-neuron system model, including its operation, its adaptation, and, most importantly, its computational objective. The principal result was a perceptron architecture that, through adaptation, learns to ask a specific space-time question answered by a subset of the space-time binary codes it can observe. Each code is rendered biologically by the spatial and temporal arrangement of action potentials. Decisions as to whether the learned question has been answered are based on a logarithmic form of Bayes' theorem, which induces the need for a linear weighted superposition of induced synaptic effects. The computational objective of the system is simply to maximize its information throughput. The present paper completes the prior work by formalizing the Hamiltonian for the single-neuron system and by providing an expression for its partition function. Besides explaining previous results, the new findings suggest the presence of a computational temperature T above which the system must operate to avoid "freezing," a state in which useful computation becomes impossible. T serves at least two important functions: (1) it gives the neuron a computational degree of freedom that enables probabilistic Bayesian decision-making, and (2) the neuron can vary it to maximize throughput capacity in the presence of measurement noise.
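
The decision rule and partition function described in the abstract can be sketched concretely as follows. The notation (binary inputs x_i, synaptic weights w_i, bias b, firing state y, and energies H(y)) is an illustrative assumption; the abstract itself fixes no symbols. For conditionally independent binary inputs, the logarithmic form of Bayes' theorem yields a log-odds that is a linear weighted superposition:

    L(x) = \log \frac{P(y=1 \mid x)}{P(y=0 \mid x)} = b + \sum_i w_i x_i .

Treating firing as a two-state system whose energies satisfy H(1) - H(0) = -L(x), the partition function at computational temperature T is

    Z = e^{-H(0)/T} + e^{-H(1)/T} ,

and the firing probability takes the Gibbs (sigmoid) form

    P(y=1 \mid x) = \frac{1}{1 + e^{-L(x)/T}} .

As T \to 0 this collapses to a deterministic threshold, the "frozen" regime in which probabilistic decisioning is lost; operating at T > 0 preserves it.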
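
A minimal numerical sketch of the same temperature-modulated rule, under the same assumed notation (all names, weights, and values below are hypothetical, not taken from the paper):

```python
import math
import random

def log_odds(x, w, b):
    # Linear weighted superposition induced by the log form of Bayes' theorem.
    return b + sum(wi * xi for wi, xi in zip(w, x))

def fire_probability(x, w, b, T):
    # Gibbs/sigmoid rule: P(y=1 | x) = 1 / (1 + exp(-L(x)/T)).
    return 1.0 / (1.0 + math.exp(-log_odds(x, w, b) / T))

def decide(x, w, b, T, rng=random):
    # Probabilistic Bayesian decision; as T -> 0 this freezes into
    # a hard threshold on the sign of L(x).
    return rng.random() < fire_probability(x, w, b, T)

# Illustrative inputs and weights; raising T softens the decision.
x, w, b = [1, 0, 1], [0.8, -0.4, 0.6], -0.5
for T in (0.1, 1.0, 5.0):
    print(f"T={T}: P(fire)={fire_probability(x, w, b, T):.3f}")
```

Raising T flattens the sigmoid toward a coin flip, while lowering it toward zero approaches the deterministic ("frozen") threshold, consistent with the abstract's claim that T provides a computational degree of freedom.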