This paper presents an efficient mapping scheme for the multilayer perceptron (MLP) network trained with the back-propagation (BP) algorithm on a network of workstations (NOW). A hybrid partitioning (HP) scheme is used to partition the network, and each partition is mapped onto a processor in the NOW. We derive the processing time and memory space required to implement the parallel BP algorithm on the NOW. Performance parameters such as speed-up and the space reduction factor are evaluated for the HP scheme and compared with earlier work based on a vertical partitioning (VP) scheme for mapping the MLP onto a NOW. The performance of the HP scheme is evaluated by solving an optical character recognition (OCR) problem on a network of ALPHA machines. Analytical and experimental results show that the proposed parallel algorithm achieves better speed-up, lower communication time, and a better space reduction factor than the earlier algorithm. The paper also presents a simple and efficient static mapping scheme for heterogeneous systems. Using divisible load scheduling theory, a closed-form expression is obtained for the number of neurons assigned to each processor in the NOW. Analytical and experimental results for the static mapping problem on NOWs are also presented.
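The static mapping idea is easiest to picture for a single layer: under divisible load scheduling, each workstation receives a share of the layer's neurons matched to its speed, so that all partitions finish a forward/backward pass at roughly the same time. The Python sketch below is illustrative only; the function name, the purely proportional allocation, and the neglect of communication delays and memory limits are assumptions for clarity, not the paper's closed-form expression.

```python
# Illustrative sketch: divisible-load-style static assignment of one MLP
# layer's neurons across heterogeneous workstations. The paper's actual
# closed-form expression (which also models communication delays) is not
# reproduced here; this shows only the proportional-to-speed idea.

def assign_neurons(total_neurons, per_neuron_times):
    """Split `total_neurons` across processors, where per_neuron_times[i]
    is the (assumed) time processor i needs per neuron. Faster processors
    receive more neurons so all finish at about the same time."""
    speeds = [1.0 / t for t in per_neuron_times]
    total_speed = sum(speeds)
    # Ideal (fractional) shares proportional to processor speed.
    shares = [total_neurons * s / total_speed for s in speeds]
    counts = [int(x) for x in shares]
    # Distribute neurons lost to rounding, largest remainder first.
    order = sorted(range(len(shares)),
                   key=lambda i: shares[i] - counts[i], reverse=True)
    for i in order[: total_neurons - sum(counts)]:
        counts[i] += 1
    return counts

if __name__ == "__main__":
    # Example: 100 hidden neurons over 3 workstations of unequal speed.
    print(assign_neurons(100, [1.0, 2.0, 4.0]))  # -> [57, 29, 14]
```

A usage note: with per-neuron times of 1, 2, and 4 units, the fastest machine gets roughly four times as many neurons as the slowest, which is the load-balancing behaviour the closed-form divisible-load solution formalises once communication costs are also taken into account.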