Transputers and neural networks: an analysis of implementation constraints and performance

  • Authors:
  • J. M. J. Murre

  • Affiliations:
  • Unit of Experimental & Theoretical Psychology, Leiden University

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 1993

Abstract

A performance analysis is presented that focuses on the achievable speedup of a neural network implementation and on the optimal size of a processor network (transputers, or multicomputers that communicate in a comparable manner). For fully and randomly connected neural networks, the topology of the processor network can have only a small, constant effect on the iteration time. With randomly connected neural networks, even severely limiting node fan-in does little to reduce the communication overhead. The class of modular neural networks is studied as a separate case and is shown to have better implementation characteristics. On the basis of these implementation constraints, it is argued that randomly connected neural networks cannot be realistic models of the brain.
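The trade-off behind the abstract's "optimal size of a processor network" can be illustrated with a toy cost model (this is an illustrative sketch, not the paper's actual formulas): per-iteration time is modeled as a computation term that shrinks as work is divided over `p` processors, plus a communication term that grows with `p`. The constants `a` and `b` and the form of both terms are assumptions for illustration only.

```python
def iteration_time(p, n, a=1e-6, b=1e-7):
    """Toy model: a*n^2/p is computation for a fully connected net of n
    nodes split over p processors; b*n*p is communication overhead that
    grows as more processors must exchange activations."""
    return a * n * n / p + b * n * p

def optimal_processors(n, a=1e-6, b=1e-7, p_max=1024):
    """Smallest integer p in [1, p_max] minimizing the modeled time.
    Beyond this point, adding processors makes iterations slower."""
    return min(range(1, p_max + 1), key=lambda p: iteration_time(p, n, a, b))

# With these (assumed) constants, the optimum for n = 1000 nodes falls
# at p = sqrt(a*n/b) = 100 processors; the achieved speedup over a
# single processor is well below p itself, because at the optimum the
# communication term equals the computation term.
n = 1000
p_star = optimal_processors(n)
speedup = iteration_time(1, n) / iteration_time(p_star, n)
```

The qualitative point matches the abstract: once communication dominates, neither processor topology nor fan-in limits can buy back more than a constant factor, so the optimal network size is bounded.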