Information theoretic learning with adaptive kernels

  • Authors:
  • Abhishek Singh; José C. Príncipe

  • Affiliations:
  • Computational NeuroEngineering Laboratory, NEB 486, Bldg #33, P.O. Box 116130, University of Florida, Gainesville, FL 32611, USA (both authors)

  • Venue:
  • Signal Processing
  • Year:
  • 2011

Abstract

This paper presents an online algorithm for adapting the kernel width, a free parameter in information theoretic cost functions based on Rényi's entropy. This kernel computes the interactions between the error samples and essentially controls the shape of the performance surface over which the parameters of the system adapt. Since the error in an adaptive system is non-stationary during training, a fixed kernel width may affect the adaptation dynamics and even compromise the location of the global optimum in parameter space. The proposed online algorithm for adapting the kernel width is derived from first principles and minimizes the Kullback-Leibler divergence between the estimated error density and the true density. We characterize the performance of this novel approach with simulations of linear and nonlinear system training, using the minimum error entropy criterion together with the proposed adaptive kernel algorithm. We conclude that adapting the kernel width improves the rate of convergence of the parameters and decouples the convergence rate from the misadjustment of the filter weights.
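The idea described in the abstract can be sketched in a few lines of code. The Python/NumPy snippet below is a minimal, hypothetical illustration, not the authors' exact derivation: it trains a linear FIR filter with a stochastic minimum error entropy (MEE) update over a sliding window of errors, and adapts the Gaussian kernel width by gradient ascent on the leave-one-out log-likelihood of the error samples under their kernel density estimate, which is one standard proxy for minimizing the KL divergence between the estimated and true error densities. The function name, window length, and step sizes (`mee_train_adaptive_kernel`, `window`, `mu_w`, `mu_sigma`) are assumptions made for the example.

```python
import numpy as np


def mee_train_adaptive_kernel(x, d, num_taps=4, window=50,
                              mu_w=0.1, mu_sigma=0.05, sigma0=1.0):
    """Illustrative sketch (not the paper's exact algorithm):
    train a linear FIR filter with a stochastic MEE update while
    adapting the Gaussian kernel width sigma online."""
    w = np.zeros(num_taps)
    sigma = sigma0
    errors = []                                    # sliding window of recent errors

    def gauss(u, s):
        # Gaussian kernel G_s(u)
        return np.exp(-u**2 / (2.0 * s**2)) / (np.sqrt(2.0 * np.pi) * s)

    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]        # input tap vector
        e = d[n] - w @ u                           # instantaneous error
        errors.append(e)
        if len(errors) > window:
            errors.pop(0)
        E = np.array(errors)

        # MEE weight update: stochastic gradient of the quadratic
        # information potential V = (1/N^2) sum_ij G_sigma(e_i - e_j),
        # with past errors in the window treated as constants.
        diffs = e - E
        k = gauss(diffs, sigma)
        grad_w = np.sum(k * diffs) / (sigma**2 * len(E)) * u
        w += mu_w * grad_w                         # ascend V (descend entropy)

        # Kernel width update: gradient ascent on the leave-one-out
        # log-likelihood of e under the KDE built from the other errors
        # (a proxy for minimizing KL(true || estimated) error density).
        if len(E) > 1:
            others = E[:-1]
            k2 = gauss(e - others, sigma)
            p_hat = np.mean(k2) + 1e-12
            dk_dsigma = k2 * ((e - others)**2 / sigma**3 - 1.0 / sigma)
            grad_sigma = np.mean(dk_dsigma) / p_hat
            sigma = max(sigma + mu_sigma * grad_sigma, 1e-3)

    return w, sigma
```

In a sketch like this, the adapted width typically shrinks as the error distribution concentrates during training, which is in line with the abstract's claim that width adaptation helps decouple convergence rate from misadjustment; the exact update rule in the paper should be taken from the paper itself.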