On speeding up computation in information theoretic learning

  • Authors:
  • Sohan Seth; José C. Príncipe

  • Affiliations:
  • Computational Neuro-Engineering Laboratory, University of Florida, Gainesville, Florida (both authors)

  • Venue:
  • IJCNN'09: Proceedings of the 2009 International Joint Conference on Neural Networks
  • Year:
  • 2009

Abstract

With the recent progress in kernel-based learning methods, computation with Gram matrices has received immense attention. However, computing the entire Gram matrix requires time quadratic in the number of samples. Therefore, a considerable amount of work has focused on extracting the relevant information from the Gram matrix without accessing all of its elements. Most of these methods exploit the positive definiteness and rapidly decaying eigenspectrum of the Gram matrix. Although information theoretic learning (ITL) is conceptually different from kernel-based learning, several ITL estimators can be written in terms of Gram matrices. A key difference, however, is that a few ITL estimators involve a special type of matrix which is neither positive definite nor symmetric. In this paper we discuss how the techniques developed for kernel-based learning can be adapted to reduce the computational complexity of ITL estimators involving both Gram matrices and these other matrices.
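
To make the kind of speedup at stake concrete, here is a minimal sketch (not the authors' implementation) of one standard low-rank technique that exploits the positive definiteness and decaying eigenspectrum of the Gram matrix: pivoted incomplete Cholesky factorization, K ≈ GGᵀ with G of size n × d and d ≪ n. The ITL information potential, i.e. the sample estimator underlying quadratic Rényi entropy, equals (1/n²)·1ᵀK1 and can then be approximated as (1/n²)·‖Gᵀ1‖² at O(nd²) cost instead of O(n²). The Gaussian kernel, the tolerance parameter `tol`, and the one-dimensional data are illustrative assumptions.

```python
# Sketch: approximating the ITL information potential via pivoted
# incomplete Cholesky factorization of the Gram matrix. Illustrative
# only; kernel choice and tolerance are assumptions, not from the paper.
import numpy as np

def gaussian_kernel_column(x, i, sigma):
    """Column i of the Gram matrix: K[j, i] = exp(-(x_j - x_i)^2 / (2 sigma^2)).

    x is a 1-D sample array; multivariate data would need a distance matrix.
    """
    d = x - x[i]
    return np.exp(-(d * d) / (2.0 * sigma ** 2))

def incomplete_cholesky(x, sigma, tol=1e-6, max_rank=None):
    """Pivoted incomplete Cholesky: returns G (n x d) with K ~= G @ G.T.

    Only the d pivot columns of K are ever evaluated (O(n d) kernel
    calls); the full n x n Gram matrix is never formed.
    """
    n = len(x)
    max_rank = n if max_rank is None else max_rank
    G = np.zeros((n, max_rank))
    diag = np.ones(n)  # residual diagonal of K - G G^T (K_ii = 1 here)
    for k in range(max_rank):
        i = int(np.argmax(diag))      # pivot on the largest residual
        if diag[i] <= tol:            # residual is negligible: stop early
            return G[:, :k]
        col = gaussian_kernel_column(x, i, sigma)   # one Gram column
        G[:, k] = (col - G[:, :k] @ G[i, :k]) / np.sqrt(diag[i])
        diag -= G[:, k] ** 2
        diag[i] = 0.0                 # exact zero, guards against rounding
    return G

def information_potential(G):
    """IP = (1/n^2) 1^T K 1 ~= (1/n^2) ||G^T 1||^2, at O(n d) cost."""
    n = G.shape[0]
    s = G.sum(axis=0)                 # G^T 1
    return float(s @ s) / (n * n)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=2000)
    G = incomplete_cholesky(x, sigma=1.0, tol=1e-8)
    print("rank:", G.shape[1], "IP ~=", information_potential(G))
```

Since only the pivot columns of the Gram matrix are formed, memory is O(nd) as well. Note that this factorization relies on symmetry and positive definiteness; the non-symmetric, non-positive-definite matrices mentioned in the abstract require a different treatment and are not covered by this sketch.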