This paper addresses the problem of developing efficient parallel algorithms for training a neural network-based Fingerprint Image Comparison (FIC) system. The target architecture is assumed to be a coarse-grain distributed-memory parallel machine. Two types of parallelism--node parallelism and training set parallelism (TSP)--are investigated. Theoretical analysis and experimental results show that node parallelism yields low speedup and poor scalability, while TSP delivers the best speedup performance. TSP, however, suffers from a slow convergence rate. To mitigate this effect, a modified training set parallel algorithm using weighted contributions of synaptic connections is proposed. Experimental results show that this algorithm provides a fast convergence rate while retaining the best speedup obtained. The combination of TSP with node parallelism is also investigated; this hybrid approach achieves good overall performance, trading a slight decrease in speedup for better scalability. All of the above algorithms are implemented on a 32-node CM-5.
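The training-set-parallel scheme described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration (not the paper's implementation): it assumes a single linear neuron trained by gradient descent, equal-size data shards, and simulated workers; the `merge_weights` coefficients stand in for the proposed "weighted contributions of synaptic connections" when combining per-shard updates, with a uniform average modeling plain TSP.

```python
# Hypothetical sketch of training set parallelism (TSP): the training set is
# split across P workers, each worker computes a weight update on its local
# shard, and the updates are merged into one global update. A uniform merge
# models plain TSP; non-uniform merge coefficients model the paper's idea of
# weighting each shard's contribution to the synaptic weights.
import numpy as np

def local_update(weights, shard_x, shard_y, lr=0.1):
    """One gradient step of a single linear neuron on a local shard (LMS rule)."""
    pred = shard_x @ weights
    grad = shard_x.T @ (pred - shard_y) / len(shard_y)
    return -lr * grad  # weight delta proposed by this shard

def tsp_step(weights, shards, merge_weights=None):
    """Compute per-shard updates (parallel in spirit, sequential here) and merge."""
    deltas = [local_update(weights, x, y) for x, y in shards]
    if merge_weights is None:  # plain TSP: uniform average of shard updates
        merge_weights = np.ones(len(deltas)) / len(deltas)
    merged = sum(c * d for c, d in zip(merge_weights, deltas))
    return weights + merged
```

With equal-size shards the uniform merge reproduces the full-batch gradient step exactly, which is why plain TSP parallelizes so well; the convergence issue arises in the stochastic, multi-epoch setting, which the weighted merge is meant to address.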