Connectionist models, such as artificial neural systems, offer an intrinsically concurrent computational paradigm. We investigate the architectural requirements for efficiently simulating large neural networks on a multicomputer system with thousands of fine-grained processors and distributed memory. First, models for characterizing the structure of a neural network and the function of individual cells are developed. These models provide guidelines for efficiently mapping the network onto multicomputer topologies such as the hypercube, hypernet, and torus. They are further used to estimate the amount of interprocessor communication bandwidth required, and the number of processors needed to meet a particular cost/performance goal. Design issues such as memory organization and the effect of VLSI technology are also considered.
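To illustrate the kind of estimate described above, the following is a minimal sketch (not the authors' actual model) of counting interprocessor communication for a neural network mapped onto a hypercube. It assumes a simple block mapping of neurons to processors; the function names (`hamming`, `estimate_traffic`) and the random test network are illustrative assumptions, and the cost metric is link traversals per simulation step, where a message between hypercube nodes crosses a number of links equal to the Hamming distance between their node IDs.

```python
import random

def hamming(a: int, b: int) -> int:
    """Hop distance between two hypercube nodes: the Hamming
    distance between their binary node IDs."""
    return bin(a ^ b).count("1")

def estimate_traffic(num_neurons: int, num_procs: int, connections) -> int:
    """Estimate per-step interprocessor traffic (in link traversals)
    for a block mapping of neurons onto a hypercube of num_procs nodes.
    Illustrative sketch only, assuming num_procs divides num_neurons."""
    block = num_neurons // num_procs          # neurons per processor
    hops = 0
    for src, dst in connections:
        p_src, p_dst = src // block, dst // block
        hops += hamming(p_src, p_dst)         # 0 when both neurons are local
    return hops

# Hypothetical network: 1024 neurons, 16 random outgoing connections each,
# mapped onto a 16-node (4-dimensional) hypercube.
random.seed(0)
N, P = 1024, 16
conns = [(i, random.randrange(N)) for i in range(N) for _ in range(16)]
print(estimate_traffic(N, P, conns))
```

Dividing such a traffic count by the per-link bandwidth and the time allotted to one simulation step gives a first-order check of whether a candidate topology and processor count meet a cost/performance goal, in the spirit of the analysis the abstract describes.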