Accelerated Gradient Learning Algorithm for Neural Network Weights Update

  • Authors:
  • Zeljko Hocenski, Mladen Antunovic, Damir Filko

  • Affiliations:
  • Faculty of Electrical Engineering, University J.J. Strossmayer, Osijek, Croatia 31000 (all authors)

  • Venue:
  • KES '08 Proceedings of the 12th international conference on Knowledge-Based Intelligent Information and Engineering Systems, Part I
  • Year:
  • 2008

Abstract

This work proposes a decomposition of the gradient learning algorithm for neural network weight updates. The decomposition enables parallel execution, which is well suited to implementation on a computer grid. The improvement is an accelerated learning rate, which may be essential for time-critical decision processes. The proposed solution is tested and verified in an MLP neural network case study over a wide range of parameters, including the number of inputs/outputs, the length of the input/output data, and the number of neurons and layers. Experimental results show time savings under multi-threaded execution.
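The abstract does not reproduce the algorithm itself, but one common way to decompose a gradient computation for parallel execution is data-parallel splitting: partition the training batch across workers, compute a partial gradient on each chunk, and combine the size-weighted partial gradients. The sketch below illustrates this idea for a single linear layer with an MSE loss; the function names (`chunk_gradient`, `parallel_gradient`) and the thread-pool setup are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def chunk_gradient(W, X, Y):
    # Gradient of the MSE loss for a linear layer on one data chunk.
    pred = X @ W
    return X.T @ (pred - Y) / len(X)

def parallel_gradient(W, X, Y, n_workers=4):
    # Decompose the batch into chunks; each worker computes a partial gradient.
    chunks = np.array_split(np.arange(len(X)), n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as ex:
        grads = list(ex.map(lambda idx: chunk_gradient(W, X[idx], Y[idx]),
                            chunks))
    # Combine: weight each partial gradient by its chunk size, then normalize.
    return sum(g * len(c) for g, c in zip(grads, chunks)) / len(X)
```

Because the full-batch gradient is a sum over per-example terms, the combined result equals the serial gradient exactly; the parallelism changes only where the partial sums are computed, which is what makes the scheme suitable for grid or multi-threaded execution.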