Neural network simulation on shared-memory vector multiprocessors

  • Authors:
  • C.-J. Wang;C.-H. Wu;S. Sivasindaram

  • Affiliations:
  • Department of Electrical and Computer Engineering, University of Colorado, Colorado Springs, CO

  • Venue:
  • Proceedings of the 1989 ACM/IEEE conference on Supercomputing
  • Year:
  • 1989

Abstract

We simulate three neural networks on a vector multiprocessor. Training time can be reduced significantly, especially when the training data set is large. The three neural networks are: 1) the feedforward network, 2) the recurrent network, and 3) the Hopfield network. The training algorithms are programmed to best exploit 1) the inherent parallelism in neural computing and 2) the vector and concurrent operations available on the parallel machine. To verify the correctness of the parallelized training algorithms, each neural network is trained to perform a specific task: the feedforward network is trained to compute the Fourier transform, the recurrent network is trained to predict the solution of a delay differential equation, and the Hopfield network is trained to solve the traveling salesman problem. The machine we experiment with is the Alliant FX/80.
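The abstract does not include source code; the sketch below is only a minimal C illustration of the kind of batch-level parallelism it describes, in which the loop over training patterns can be distributed across processors (with the gradient accumulated as a reduction) and the inner loops over weights are candidates for vectorization. The single-layer network, delta-rule update, layer sizes, and learning rate are illustrative assumptions, not the authors' Fortran or C implementation on the Alliant FX/80.

```c
/*
 * Illustrative sketch (assumed structure, not the paper's code):
 * one batch training step of a single-layer feedforward network,
 * written so a parallelizing/vectorizing compiler could distribute
 * the pattern loop across processors (gradient accumulation as a
 * reduction) and vectorize the inner loops over weights.
 */
#include <stdio.h>
#include <math.h>

#define N_IN   4      /* input units  (illustrative size) */
#define N_OUT  2      /* output units (illustrative size) */
#define N_PAT  8      /* training patterns per batch (illustrative) */

static double w[N_OUT][N_IN];     /* weight matrix */
static double x[N_PAT][N_IN];     /* input patterns */
static double t[N_PAT][N_OUT];    /* target patterns */

/* One batch update with the delta rule. */
static void train_step(double eta)
{
    double grad[N_OUT][N_IN] = {{0.0}};

    /* Pattern loop: independent forward passes; the gradient
     * accumulation would become a reduction when run concurrently. */
    for (int p = 0; p < N_PAT; p++) {
        double y[N_OUT];
        for (int j = 0; j < N_OUT; j++) {      /* vectorizable dot product */
            double s = 0.0;
            for (int i = 0; i < N_IN; i++)
                s += w[j][i] * x[p][i];
            y[j] = tanh(s);                    /* smooth activation */
        }
        for (int j = 0; j < N_OUT; j++) {
            double err = (t[p][j] - y[j]) * (1.0 - y[j] * y[j]);
            for (int i = 0; i < N_IN; i++)     /* vectorizable accumulate */
                grad[j][i] += err * x[p][i];
        }
    }

    for (int j = 0; j < N_OUT; j++)            /* vectorizable update */
        for (int i = 0; i < N_IN; i++)
            w[j][i] += eta * grad[j][i];
}

int main(void)
{
    /* Tiny synthetic batch just to make the sketch runnable. */
    for (int p = 0; p < N_PAT; p++)
        for (int i = 0; i < N_IN; i++)
            x[p][i] = sin(0.5 * (p + 1) * (i + 1));
    for (int p = 0; p < N_PAT; p++)
        for (int j = 0; j < N_OUT; j++)
            t[p][j] = (j == p % N_OUT) ? 0.9 : -0.9;

    for (int step = 0; step < 1000; step++)
        train_step(0.05);

    printf("w[0][0] after training: %f\n", w[0][0]);
    return 0;
}
```

Compiled with `cc sketch.c -lm`, the example runs as plain sequential code; the point is only that each loop nest has independent iterations (up to the reduction), which is the property a vector multiprocessor such as the Alliant FX/80 exploits.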