Efficient parallel learning algorithms are proposed for training a powerful modular neural network, the hierarchical mixture of experts (HME). The parallelizations are based on modular parallelism, i.e. the parallel execution of network modules. By modeling the speed-up as a function of the number of processors and the number of training examples, several improvements are derived, such as pipelining the training examples in packets. The theoretical models prove accurate when compared with experimental measurements. For regular topologies, an analysis of the models shows that the parallel algorithms are highly scalable when the size of the experts grows from linear units to multi-layer perceptrons (MLPs). These results are confirmed experimentally, with near-linear speedups achieved for HME-MLP. Although this work can be viewed as a case study in the parallelization of HME neural networks, both the algorithms and the theoretical models can be extended to other learning rules and to less regular tree architectures.
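To make the two key ideas concrete, the Python sketch below illustrates modular parallelism (each expert module dispatched to its own worker) and pipelining of training examples by packets, for a one-level mixture of linear experts with a softmax gate. It is a minimal sketch under stated assumptions: the gating rule, packet size, worker count, and every identifier are illustrative choices, not the paper's actual algorithm or architecture.

# Minimal sketch: modular parallelism and packet pipelining for a
# one-level mixture of experts. Linear experts and a softmax gate are
# assumed for illustration; this is not the paper's implementation.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(0)

D, K, PACKET = 8, 4, 32                 # input dim, experts, examples per packet
experts = [rng.standard_normal((D, 1)) for _ in range(K)]   # linear expert weights
gate_w = rng.standard_normal((D, K))                        # gating-network weights

def expert_forward(k, X):
    """Forward pass of one expert module; modules run concurrently."""
    return X @ experts[k]               # shape (PACKET, 1)

def forward_packet(X, pool):
    """Process one packet, dispatching each expert module to a worker."""
    outs = list(pool.map(lambda k: expert_forward(k, X), range(K)))
    g = np.exp(X @ gate_w)
    g /= g.sum(axis=1, keepdims=True)   # softmax gate, shape (PACKET, K)
    return sum(g[:, k:k + 1] * outs[k] for k in range(K))

# Pipelining by packets: successive packets stream through the modules,
# amortizing dispatch overhead over PACKET examples per synchronization.
with ThreadPoolExecutor(max_workers=K) as pool:
    for packet in np.array_split(rng.standard_normal((4 * PACKET, D)), 4):
        y = forward_packet(packet, pool)
print(y.shape)                          # (32, 1)

The per-packet dispatch in this sketch hints at the scalability result stated above: the larger each expert's forward pass (an MLP rather than a linear unit), the better the per-module computation amortizes the coordination cost, pushing the speed-up toward linear in the number of processors.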