VLSI implementation of a neural network memory with several hundreds of neurons
AIP Conference Proceedings 151 on Neural Networks for Computing
The connection machine
Learning internal representations by error propagation
Parallel distributed processing: explorations in the microstructure of cognition, vol. 1
AFL-1: A Programming Language for Massively Concurrent Computers
Connectionist networks are powerful techniques, inspired by the parallel architecture of the brain, for discovering intrinsic structure in data. They are, however, poorly suited to implementation on serial computers. In this paper, we describe the first implementation of a connectionist learning algorithm, error back-propagation, on a fine-grained parallel computer, the Connection Machine. As an example of how the system can be used, we present a parallel implementation of NETtalk, a connectionist network that learns the mapping from English text to the pronunciation of that text. Networks containing up to 16 million links can currently be simulated on the Connection Machine at nearly twice the speed of the Cray-2. We found that the major impediment to further speed-up is communication between processors, not processor speed per se. We believe the advantage of parallel computers will become even clearer as developments in parallel computing continue.
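As background for the abstract above, the following is a minimal serial sketch of error back-propagation, the learning algorithm the paper parallelizes. The network size, learning rate, and toy XOR task are illustrative assumptions, not details of the NETtalk experiments; the weight matrices correspond to the "links" that the Connection Machine implementation distributes across processors.

```python
# Minimal sketch of error back-propagation on a toy task (XOR).
# All hyperparameters here are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy training set: the XOR function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer; W1 and W2 hold the network's links (weights).
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))
lr = 1.0

def mse():
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    return float(np.mean((out - y) ** 2))

initial = mse()
for _ in range(5000):
    # Forward pass: compute activations layer by layer.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backward pass: propagate the output error back through the layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates to the link weights.
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h
final = mse()
```

In the parallel setting described in the abstract, the forward and backward matrix products are the processor-local work, while combining the per-link error terms is what generates the interprocessor communication the authors identify as the bottleneck.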