Parallel Neural Learning by Iteratively Adjusting Error Thresholds

  • Authors:
  • Affiliations:
  • Venue: ICPADS '98 Proceedings of the 1998 International Conference on Parallel and Distributed Systems
  • Year: 1998

Abstract

In this paper, we first propose a modified back-propagation learning algorithm that iteratively halves the error threshold so that training instances producing large weight changes are processed as early as possible. This modified back-propagation learning algorithm is then parallelized on n processors under the single-channel broadcast communication model, where n is the number of training instances. Finally, the parallel back-propagation learning algorithm is adapted to run on a bounded number of processors to cope with real-world conditions.
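
The abstract does not give implementation details, so the following is only a minimal sketch of the threshold-halving idea, not the authors' algorithm: a single-hidden-layer network trained with standard back-propagation, where each pass updates weights only for instances whose error still exceeds the current threshold, and the threshold is then halved. The network size, learning rate, epoch count, and threshold values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_threshold_halving(X, T, hidden=4, lr=0.5, epochs_per_level=200,
                            init_threshold=0.5, final_threshold=0.01):
    """Back-propagation in which each pass skips instances whose error is
    already below the current threshold; the threshold is then halved."""
    n_in, n_out = X.shape[1], T.shape[1]
    W1 = rng.normal(scale=0.5, size=(n_in, hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, n_out))
    threshold = init_threshold
    while threshold >= final_threshold:
        for _ in range(epochs_per_level):
            for x, t in zip(X, T):
                # Forward pass through one hidden layer.
                h = sigmoid(x @ W1)
                y = sigmoid(h @ W2)
                err = 0.5 * np.sum((t - y) ** 2)
                # Only instances whose error exceeds the current threshold
                # (those yielding large weight changes) are processed.
                if err <= threshold:
                    continue
                # Standard back-propagation weight updates.
                delta_out = (y - t) * y * (1.0 - y)
                delta_hid = (delta_out @ W2.T) * h * (1.0 - h)
                W2 -= lr * np.outer(h, delta_out)
                W1 -= lr * np.outer(x, delta_hid)
        threshold /= 2.0  # halve the error threshold for the next level
    return W1, W2

# Toy usage: learn XOR with the threshold-halving schedule.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
W1, W2 = train_threshold_halving(X, T)
print(sigmoid(sigmoid(X @ W1) @ W2).round(2))
```

Skipping instances whose error is already below the threshold concentrates each pass on the instances that would yield the largest weight changes, which is the effect the abstract describes; the parallel versions on n processors and on a bounded number of processors are not sketched here.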