Original Contribution: Convergence and divergence in neural networks: Processing of chaos and biological analogy

  • Authors:
  • George J. Mpitsos; Robert M. Burton, Jr.

  • Venue:
  • Neural Networks
  • Year:
  • 1992

Abstract

We have used simple neural networks as models to examine two interrelated biological questions: What are the functional implications of the converging and diverging projections that profusely interconnect neurons? How do the dynamical features of the input signal affect the responses of such networks? In this paper we examine subsets of these questions by using error back propagation learning as the network response in question. The dynamics of the input signals were suggested by our previous biological findings. These signals consisted of chaotic series generated by the recursive logistic equation, x_{n+1} = 3.95(1 - x_n)x_n, random noise, and sine functions. The input signals were also sent to a variety of teacher functions that controlled the type of computations the networks were required to do. Single and double hidden-layer networks were used to examine, respectively, divergence and a combination of divergence and convergence. Networks containing single and multiple input/output units were used to determine how the networks learned when they were required to perform single or multiple tasks on their input signals. Back propagation was performed "on-line" in each training trial, and all processing was analog. Thereafter, the network units were examined "neurophysiologically" by selectively removing individual synapses to determine their effect on system error. The findings show that the dynamics of the input signals strongly affect the learning process. Chaotic point processes, analogous to spike trains in biological systems, provide excellent signals on which networks can perform a variety of computational tasks. Continuous functions that vary within bounds, whether chaotic or not, impose some limitations. Differences in convergence and divergence determine the relative strength of the trained network connections. Many weak synapses, and even some of the strongest ones, are multifunctional in that they have approximately equal effects in all learned tasks, as has been observed biologically. Training sets all synapses to optimal levels, and many units are automatically given task-specific assignments. But despite their optimal settings, many synapses produce relatively weak effects, particularly in networks that combine convergence and divergence within the same layer. Such findings of "lazy" synapses suggest a re-examination of the role of weak synapses in biological systems. Of equal biological importance is the finding that networks containing only trainable synapses are severely limited computationally unless trainable thresholds are included. Network capabilities are also severely limited by relatively small increases in the number of network units. Some of these findings are immediately addressable from the code of the back propagation algorithm itself. Others, such as limitations imposed by increasing network size, need to be viewed through error surfaces generated by the trial-to-trial connection changes that occur during learning. We discuss the biological implications of the findings.
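
The following is a minimal, illustrative sketch (not the authors' code) of the kind of experiment the abstract describes: a chaotic input series is generated from the logistic equation x_{n+1} = 3.95(1 - x_n)x_n, a small single-hidden-layer analog network with trainable thresholds is trained by on-line back propagation against an assumed teacher function (here, one-step-ahead prediction of the series), and the trained synapses are then probed "neurophysiologically" by removing them one at a time and measuring the change in system error. The architecture, learning rate, and teacher task are assumptions chosen for illustration.

# Minimal sketch (assumed setup, not the authors' code): chaotic input from the
# logistic map, on-line back propagation in a 1-hidden-layer analog network with
# trainable thresholds (biases), then single-synapse removal to gauge each
# synapse's contribution to system error.
import numpy as np

rng = np.random.default_rng(0)

def logistic_series(n, x0=0.4, r=3.95):
    """Chaotic series from the recursive logistic equation x_{n+1} = r(1 - x_n)x_n."""
    x = np.empty(n)
    x[0] = x0
    for i in range(n - 1):
        x[i + 1] = r * (1.0 - x[i]) * x[i]
    return x

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Single hidden layer: 1 input -> H hidden -> 1 output (sizes are assumptions).
H, eta = 6, 0.5
W1 = rng.normal(0, 0.5, (H, 1)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (1, H)); b2 = np.zeros(1)

series = logistic_series(2000)
inputs, targets = series[:-1], series[1:]   # assumed teacher: predict the next value

def forward(x):
    h = sigmoid(W1[:, 0] * x + b1)          # hidden-layer analog activations
    y = sigmoid(W2[0] @ h + b2[0])          # single analog output
    return h, y

# On-line back propagation: weights and thresholds updated after every presentation.
for x, t in zip(inputs, targets):
    h, y = forward(x)
    delta_out = (y - t) * y * (1.0 - y)
    delta_hid = delta_out * W2[0] * h * (1.0 - h)
    W2[0] -= eta * delta_out * h
    b2[0] -= eta * delta_out
    W1[:, 0] -= eta * delta_hid * x
    b1 -= eta * delta_hid

def system_error():
    return np.mean([(forward(x)[1] - t) ** 2 for x, t in zip(inputs, targets)])

baseline = system_error()

# "Neurophysiological" probe: remove each hidden-to-output synapse in turn and
# record how much the system error rises, analogous to the paper's lesion test.
for j in range(H):
    saved = W2[0, j]
    W2[0, j] = 0.0
    print(f"synapse {j}: weight {saved:+.3f}, error rise {system_error() - baseline:.5f}")
    W2[0, j] = saved

Zeroing a single weight stands in for the paper's selective synapse removal; synapses whose removal barely raises the system error correspond to the "lazy" synapses discussed in the findings, though which synapses turn out lazy in this toy setup depends on the initialization and the teacher function chosen.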