Global convergence of Oja's PCA learning algorithm with a non-zero-approaching adaptive learning rate

  • Authors:
  • Jian Cheng Lv; Zhang Yi; K. K. Tan

  • Affiliations:
  • Jian Cheng Lv and Zhang Yi: Computational Intelligence Laboratory, School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China; K. K. Tan: Department of Electrical and Computer Engineering, National University of Singapore, 4 Engineering Drive 3, Singapore 117576, Singapore

  • Venue:
  • Theoretical Computer Science
  • Year:
  • 2006

Abstract

A non-zero-approaching adaptive learning rate is proposed to guarantee the global convergence of Oja's principal component analysis (PCA) learning algorithm. Most existing adaptive learning rates for Oja's PCA algorithm must approach zero as the learning step increases, which is impractical in many applications because of computational round-off limitations and tracking requirements. The proposed adaptive learning rate overcomes this shortcoming: it converges to a positive constant, so the evolution rate does not decay as the learning step increases. This contrasts with zero-approaching learning rates, which slow convergence considerably, and increasingly so, over time. Rigorous mathematical proofs of the global convergence of Oja's algorithm with the proposed learning rate are given in detail by studying the convergence of an equivalent deterministic discrete-time (DDT) system. Extensive simulations illustrate and verify the derived theory, and the results show that this adaptive learning rate makes Oja's PCA algorithm more suitable for online learning.
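For intuition, below is a minimal sketch of the classical single-unit Oja rule driven by a learning rate that converges to a positive constant rather than to zero, the behavior the abstract describes. The schedule eta(k) = eta_inf + decay/(k+1) and the names oja_pca, eta_inf, and decay are illustrative assumptions; the paper's actual adaptive rate is data-dependent and is not reproduced here.

```python
import numpy as np

def oja_pca(X, eta_inf=0.01, decay=0.1, seed=0):
    """Single-unit Oja rule with a non-zero-approaching learning rate.

    The schedule eta(k) = eta_inf + decay / (k + 1) is a hypothetical
    stand-in: it converges to the positive constant eta_inf instead of
    approaching zero, unlike the paper's derived adaptive rate.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)            # start from a random unit vector
    for k in range(n):
        x = X[k]
        y = x @ w                     # projection onto current estimate
        eta = eta_inf + decay / (k + 1)  # stays bounded away from zero
        w += eta * y * (x - y * w)    # Oja update: Hebbian term + decay
    return w

# Usage: recover the leading principal direction of correlated 2-D data.
rng = np.random.default_rng(1)
X = rng.standard_normal((5000, 2)) @ np.array([[1.5, 0.0], [0.0, 0.5]])
w = oja_pca(X)
print(np.round(w, 3))  # close to +/-[1, 0], the dominant eigenvector
```

Because eta never decays to zero, the iterate keeps a constant evolution rate and can track slow changes in the input statistics, at the cost of a small persistent fluctuation around the principal direction, which is the trade-off the paper's convergence analysis addresses.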