Convergence analysis of convex incremental neural networks

  • Authors:
  • Lei Chen; Hung Keng Pung

  • Affiliations:
  • Network Systems and Service Lab., Department of Computer Science, National University of Singapore, Kent Ridge, Singapore (both authors)

  • Venue:
  • Annals of Mathematics and Artificial Intelligence
  • Year:
  • 2008

Abstract

Recently, a convex incremental algorithm (CI-ELM) was proposed in Huang and Chen (Neurocomputing 70:3056–3062, 2007), which randomly chooses hidden neurons and then analytically determines the output weights connecting the hidden layer and the output layer. Although the hidden neurons are generated randomly, the network constructed by CI-ELM still satisfies the universal approximation property. This random approximation theory breaks through the limitation of most conventional theories by eliminating the need to tune hidden neurons. However, because of this randomness, some neurons contribute little to reducing the residual error, which eventually increases the complexity and computational cost of the network. As a result, a precise convergence rate cannot be given for CI-ELM. Building on the results of Lee et al. (IEEE Trans Inf Theory 42(6):2118–2132, 1996), we first derive the convergence rate of a maximum CI-ELM and then systematically analyze the convergence rate of an enhanced CI-ELM. Unlike CI-ELM, the two algorithms choose hidden neurons according to a maximum or optimality principle while retaining the same network structure. Furthermore, the proofs show that our algorithms achieve smaller residual errors than CI-ELM. Because the proposed networks discard these "useless" neurons, they are more efficient. Experimental results on benchmark regression problems support our conclusions.
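
To make the incremental scheme described above concrete, the following is a minimal sketch (not the authors' reference implementation) of a CI-ELM-style update: a hidden neuron with random parameters is generated at each step, and the mixing weight beta_n for the update f_n = (1 - beta_n) f_{n-1} + beta_n g_n is determined analytically by least squares on the current residual. The function name `ci_elm_sketch`, the sigmoid activation, and the parameter ranges are illustrative assumptions.

```python
# Illustrative sketch of a convex incremental update in the spirit of CI-ELM.
# Hidden-neuron parameters are drawn at random; only the mixing weight beta_n
# is computed analytically so that the new output
#     f_n = (1 - beta_n) * f_{n-1} + beta_n * g_n
# minimizes the residual ||y - f_n|| on the training set.
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def ci_elm_sketch(X, y, max_neurons=50, seed=None):
    """Incrementally fit a single-output network on X (N x d), targets y (N,)."""
    rng = np.random.default_rng(seed)
    N, d = X.shape
    f = np.zeros(N)          # current network output f_{n-1} on the training set
    neurons = []             # list of (input_weights, bias, output_weight)
    for _ in range(max_neurons):
        a = rng.uniform(-1.0, 1.0, size=d)   # random input weights (assumed range)
        b = rng.uniform(-1.0, 1.0)           # random bias (assumed range)
        g = sigmoid(X @ a + b)               # new hidden neuron's output g_n
        e = y - f                            # current residual e_{n-1}
        h = g - f                            # direction introduced by the new neuron
        denom = h @ h
        if denom < 1e-12:
            continue                         # neuron adds (almost) nothing; skip it
        beta = (e @ h) / denom               # analytic least-squares mixing weight
        f = (1.0 - beta) * f + beta * g      # convex-style combination of old and new
        # earlier output weights are rescaled by (1 - beta); the new neuron gets beta
        neurons = [(ai, bi, wi * (1.0 - beta)) for (ai, bi, wi) in neurons]
        neurons.append((a, b, beta))
    return neurons, f


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 1))
    y = np.sin(3 * X[:, 0])
    neurons, f = ci_elm_sketch(X, y, max_neurons=200, seed=1)
    print("training RMSE:", np.sqrt(np.mean((y - f) ** 2)))
```

The skipped-neuron branch (`denom < 1e-12`) mirrors the observation in the abstract: a randomly drawn neuron may contribute little to reducing the residual, and the algorithms analyzed in the paper instead select neurons by a maximum or optimality principle rather than accepting every random draw.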