AVLR-EBP: A Variable Step Size Approach to Speed-up the Convergence of Error Back-Propagation Algorithm

  • Authors:
  • Arman Didandeh; Nima Mirbakhsh; Ali Amiri; Mahmood Fathy

  • Affiliations:
  • Department of Computer Science and IT, Institute for Advanced Studies in Basic Sciences (IASBS), Zanjan, Iran; Department of Computer Science and IT, Institute for Advanced Studies in Basic Sciences (IASBS), Zanjan, Iran; Computer Engineering Group, Engineering Department, Zanjan University, Zanjan, Iran; Computer Engineering Department, Iran University of Science and Technology, Tehran, Iran

  • Venue:
  • Neural Processing Letters
  • Year:
  • 2011


Abstract

A critical issue for Neural Network based large-scale data mining algorithms is how to speed up learning. This problem is particularly challenging for the Error Back-Propagation (EBP) algorithm in Multi-Layered Perceptron (MLP) Neural Networks, given their significant role in many scientific and engineering problems. In this paper, we propose an Adaptive Variable Learning Rate EBP (AVLR-EBP) algorithm to address the problem of reducing the convergence time of EBP, aiming at markedly faster convergence than the standard EBP algorithm. The idea is inspired by adaptive filtering, which led us to two closely related methods of calculating the learning rate. Mathematical analysis of the AVLR-EBP algorithm confirms its convergence property. The AVLR-EBP algorithm is applied to data classification tasks. Simulation results on several well-known data sets demonstrate that the algorithm achieves a considerable reduction in convergence time compared to the standard EBP algorithm. In classifying the IRIS, Wine, Breast Cancer, Semeion and SPECT Heart datasets, the proposed algorithm requires fewer learning epochs than the standard EBP algorithm.
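The abstract does not spell out the paper's two learning-rate update rules, so the sketch below is only a minimal Python illustration of a variable-step-size EBP update, assuming one plausible adaptive-filtering-inspired rule (a normalized-LMS-style step size scaled by the inverse energy of the current input pattern). The function nlms_like_lr and all parameter values are hypothetical stand-ins, not the authors' formulas.

    import numpy as np

    def nlms_like_lr(x, eta0=0.5, eps=1e-8):
        # Hypothetical adaptive-filtering-style step size: a base rate
        # normalized by the energy of the current input, as in
        # normalized LMS; NOT the paper's actual AVLR-EBP rule.
        return eta0 / (eps + np.dot(x, x))

    def ebp_step(W1, W2, x, t):
        # One EBP iteration for a single-hidden-layer sigmoid MLP with
        # squared-error loss; the learning rate is recomputed per step.
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
        h = sigmoid(W1 @ x)                              # hidden activations
        y = sigmoid(W2 @ h)                              # network output
        delta_out = (y - t) * y * (1.0 - y)              # output-layer delta
        delta_hid = (W2.T @ delta_out) * h * (1.0 - h)   # hidden-layer delta
        eta = nlms_like_lr(x)                            # variable step size
        W2 -= eta * np.outer(delta_out, h)
        W1 -= eta * np.outer(delta_hid, x)
        return W1, W2

In this sketch the only departure from standard EBP is that eta is recalculated at every iteration from the current input rather than held fixed, which is the general mechanism the abstract describes.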