Parallel implementations of recurrent neural network learning

  • Authors:
  • Uroš Lotrič and Andrej Dobnikar

  • Affiliations:
  • Faculty of Computer and Information Science, University of Ljubljana, Slovenia (both authors)

  • Venue:
  • ICANNGA'09: Proceedings of the 9th International Conference on Adaptive and Natural Computing Algorithms
  • Year:
  • 2009

Abstract

Neural networks have proved to be effective in solving a wide range of problems. As problems become more demanding, they require larger neural networks, and the time needed for learning grows accordingly. Parallel implementations of learning algorithms are therefore vital for practical applications. The implementation, however, depends strongly on the features of the learning algorithm and on the underlying hardware architecture. For this experimental work, a dynamic problem was chosen, which calls for recurrent neural networks and a learning algorithm based on the paradigm of learning automata. Two parallel implementations of the algorithm were developed: one on a computing cluster using the MPI and OpenMP libraries, and one on a graphics processing unit using the CUDA library. The performance of both parallel implementations justifies the development of parallel algorithms.
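
The abstract does not describe the learning-automata-based algorithm or the GPU decomposition in detail. As a rough illustration of the kind of data parallelism a CUDA implementation of a recurrent network typically exploits, the sketch below advances a fully recurrent network by one time step with one thread per neuron. It is not the authors' code; the kernel name, matrix layout, problem sizes, and the tanh activation are all assumptions made for this example.

// Hypothetical sketch: one time step of a fully recurrent network on the GPU.
// One thread computes the new activation of one neuron from the previous
// network state and the current external input.
#include <cuda_runtime.h>
#include <cstdio>
#include <cmath>

__global__ void recurrentStep(const float* W,        // N x (N + M) weights, row-major
                              const float* state,    // previous activations, length N
                              const float* input,    // external inputs, length M
                              float* nextState,      // new activations, length N
                              int N, int M)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= N) return;

    float sum = 0.0f;
    for (int j = 0; j < N; ++j)               // recurrent connections
        sum += W[i * (N + M) + j] * state[j];
    for (int k = 0; k < M; ++k)               // input connections
        sum += W[i * (N + M) + N + k] * input[k];
    nextState[i] = tanhf(sum);                // assumed activation function
}

int main()
{
    const int N = 64, M = 8;                  // illustrative network size
    float *W, *state, *input, *nextState;
    cudaMallocManaged(&W, N * (N + M) * sizeof(float));
    cudaMallocManaged(&state, N * sizeof(float));
    cudaMallocManaged(&input, M * sizeof(float));
    cudaMallocManaged(&nextState, N * sizeof(float));

    // Simple deterministic initialization, just to exercise the kernel.
    for (int i = 0; i < N * (N + M); ++i) W[i] = 0.01f * (i % 7 - 3);
    for (int i = 0; i < N; ++i) state[i] = 0.0f;
    for (int k = 0; k < M; ++k) input[k] = 1.0f;

    int threads = 128;
    int blocks = (N + threads - 1) / threads;
    recurrentStep<<<blocks, threads>>>(W, state, input, nextState, N, M);
    cudaDeviceSynchronize();

    printf("nextState[0] = %f\n", nextState[0]);

    cudaFree(W); cudaFree(state); cudaFree(input); cudaFree(nextState);
    return 0;
}

The cluster variant mentioned in the abstract would distribute work differently (for example, across nodes with MPI and across cores with OpenMP), but its structure cannot be inferred from the abstract alone.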