Magnified gradient function to improve first-order gradient-based learning algorithms

  • Authors:
  • Sin-Chun Ng; Chi-Chung Cheung; Andrew Kwok-fai Lui; Shensheng Xu

  • Affiliations:
  • School of Science and Technology, The Open University of Hong Kong, Homantin, Hong Kong; Department of Electronic and Information Engineering, The Hong Kong Polytechnic University, Hunghom, Hong Kong; School of Science and Technology, The Open University of Hong Kong, Homantin, Hong Kong; School of Science and Technology, The Open University of Hong Kong, Homantin, Hong Kong

  • Venue:
  • ISNN'12: Proceedings of the 9th International Conference on Advances in Neural Networks - Volume Part I
  • Year:
  • 2012

Abstract

In this paper, we propose a new approach that improves the performance of existing first-order gradient-based fast learning algorithms in terms of both speed and global convergence capability. The idea is to magnify the gradient terms of the activation function so that fast learning and global convergence can be achieved. The approach can be applied to existing gradient-based algorithms. Simulation results show that it significantly increases both the convergence rate and the global convergence capability of popular first-order gradient-based fast learning algorithms for multi-layer feed-forward neural networks.
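The abstract does not give the magnification formula itself, so the sketch below is only one plausible reading of the idea: inside plain backpropagation, the sigmoid derivative term f'(net) = o(1 - o) is replaced by a magnified counterpart (o(1 - o))^(1/S) with S >= 1, which enlarges the near-zero gradients at the sigmoid's flat spots while leaving S = 1 equal to standard backprop. The power-law form, the constant S = 2, the network size, and the XOR task are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def magnified_deriv(out, S=2.0):
    """Sigmoid derivative out*(1-out) raised to 1/S (assumed magnification).
    For S > 1 small derivatives are enlarged, countering saturation;
    S = 1 recovers ordinary backpropagation."""
    return (out * (1.0 - out)) ** (1.0 / S)

# Tiny 2-3-1 network trained on XOR with the magnified-gradient update.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.uniform(-1, 1, (2, 3)); b1 = np.zeros(3)
W2 = rng.uniform(-1, 1, (3, 1)); b2 = np.zeros(1)
lr, S = 0.5, 2.0

for epoch in range(20000):
    h = sigmoid(X @ W1 + b1)            # hidden activations
    o = sigmoid(h @ W2 + b2)            # output activations
    err = y - o
    # The only change from plain backprop: magnify f'(net) in both deltas.
    delta_o = err * magnified_deriv(o, S)
    delta_h = (delta_o @ W2.T) * magnified_deriv(h, S)
    W2 += lr * h.T @ delta_o; b2 += lr * delta_o.sum(axis=0)
    W1 += lr * X.T @ delta_h; b1 += lr * delta_h.sum(axis=0)

print(np.round(o.ravel(), 3))  # typically approaches [0, 1, 1, 0]
```

Because the magnification touches only the derivative term, the same substitution can in principle be dropped into other first-order schemes (e.g., momentum or adaptive-step variants) without altering their update rules, which matches the abstract's claim that the approach applies to existing gradient-based algorithms.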