Back propagation with randomized cost function for training neural networks

  • Authors:
  • H. A. Babri; Y. Q. Chen; Kamran Ahsan

  • Affiliations:
  • Lahore University of Management Sciences, Computer Science Department, DHA Lahore, Pakistan; Nanyang Technological University, School of EEE, Singapore; Stanford University, Department of Management Science and Engineering, Terman Engineering Center, Stanford, California

  • Venue:
  • RSFDGrC'03 Proceedings of the 9th international conference on Rough sets, fuzzy sets, data mining, and granular computing
  • Year:
  • 2003

Abstract

This paper proposes a novel method for improving both the generalization and convergence performance of the back-propagation (BP) algorithm by using multiple cost functions with a randomizing scheme. Under certain conditions, the randomized technique converges to the global minimum with probability one. Experimental results on benchmark Encoder-Decoder problems and the NC2 classification problem show that the method is effective in enhancing BP's convergence and generalization performance.
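The abstract does not detail the authors' exact randomization scheme. The sketch below is one plausible reading, not the paper's method: at every update, a cost function is drawn at random from a small pool (here, squared error and binary cross-entropy) and BP proceeds with that cost's gradient. All function names, the choice of cost pool, and the hyperparameters are assumptions for illustration.

```python
# Illustrative sketch (not the authors' code): back-propagation on a one-hidden-layer
# network where the cost function is drawn at random from a pool at every update.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradients of each cost w.r.t. the output-layer pre-activation (sigmoid output units).
def grad_squared_error(y_hat, y):
    return (y_hat - y) * y_hat * (1.0 - y_hat)   # dE/dz for E = 0.5 * (y_hat - y)^2

def grad_cross_entropy(y_hat, y):
    return y_hat - y                              # dE/dz for binary cross-entropy

cost_grads = [grad_squared_error, grad_cross_entropy]  # assumed cost pool

def train(X, Y, hidden=2, lr=0.5, epochs=5000):
    n_in, n_out = X.shape[1], Y.shape[1]
    W1 = rng.normal(scale=0.5, size=(n_in, hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, n_out))
    for _ in range(epochs):
        # Randomizing scheme: pick one cost function for this update.
        grad_cost = cost_grads[rng.integers(len(cost_grads))]
        h = sigmoid(X @ W1)
        y_hat = sigmoid(h @ W2)
        delta_out = grad_cost(y_hat, Y)                  # output-layer error signal
        delta_hid = (delta_out @ W2.T) * h * (1.0 - h)   # back-propagated to hidden layer
        W2 -= lr * h.T @ delta_out / len(X)
        W1 -= lr * X.T @ delta_hid / len(X)
    return W1, W2

# Tiny 4-2-4 encoder-decoder style demo: learn to reproduce one-hot inputs.
X = np.eye(4)
W1, W2 = train(X, X)
print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))
```

Switching costs per update changes only the output-layer error signal; the rest of the back-propagation pass is unchanged, which is what makes the scheme cheap to add to standard BP.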