A family of minimum Renyi's error entropy algorithms for information processing

  • Authors:
  • Jose Carlos Principe; Seungju Han

  • Affiliations:
  • University of Florida; University of Florida

  • Year:
  • 2007

Abstract

Adaptive systems are self-adjusting and seek the optimum continuously, thus becoming less dependent on a priori knowledge. However, the statistics of the input signal play an important role in selecting the appropriate cost function for optimization. Recently, the error entropy criterion, with a nonparametric estimator of Renyi's quadratic entropy, was proposed by Principe, Erdogmus, and coworkers as an alternative to the mean square error (MSE) in supervised adaptation. Minimum error entropy (MEE) has been shown to be a more robust criterion for dynamic modeling and an alternative to MSE in other supervised learning applications with nonlinear systems.

The major goal of our research was to extend that work, improving the MEE algorithm and demonstrating its superior performance in practical adaptive signal processing applications. We proposed four new algorithms: minimum error entropy with self-adjusting step size (MEE-SAS), normalized minimum error entropy (NMEE), fixed-point minimum error entropy (MEE-FP), and fast minimum error entropy using the fast Gauss transform (fast MEE with FGT) and the improved fast Gauss transform (fast MEE with IFGT).

First, MEE-SAS provides a natural "Target" that automatically controls the algorithm step size. We attribute the self-adjusting step size of MEE-SAS to its changing curvature, as opposed to MEE, whose curvature is constant; consequently, MEE-SAS converges faster than MEE for the same misadjustment. In a nonstationary environment, however, MEE-SAS loses its tracking ability because of the flatness of the cost surface near the optimal solution. We solved this problem by proposing a scheme that switches between MEE and MEE-SAS in nonstationary scenarios, effectively combining the speed of MEE-SAS far from the optimal solution with the tracking ability of MEE near it.

Second, NMEE, which minimizes the weight change subject to the constraint of optimal information potential, outperforms MEE in three respects: it is less sensitive to the input power, less sensitive to the kernel size, and converges faster.

Third, MEE-FP uses the first-order optimality condition of the error entropy together with a fixed-point iteration. Because it is a second-order update similar to recursive least squares (RLS), it speeds up convergence irrespective of the eigenvalue spread of the input correlation matrix.

Fourth, the original error entropy criterion, estimated with Parzen windowing, has a computational complexity of O(N^2), compared with MSE, where N is the number of samples in the training set. The fast MEE methods with FGT and IFGT alleviate this problem by computing the entropy accurately and efficiently with Hermite and Taylor expansions in O(pN), where p is the order of the expansion.

Although the MEE cost function is particularly suited to nonlinear signal processing, in our research we used linear system problems to demonstrate the convergence properties of the new entropy-based algorithms and to compare them with their MSE counterparts. In the application chapter we addressed the two main application domains of the proposed algorithms: linear and nonlinear model fitting in the presence of impulsive noise, and nonlinear system identification.
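The MEE criterion itself admits a compact batch implementation. The sketch below (Python/NumPy; all function names are ours, not from the dissertation) estimates Renyi's quadratic information potential V(e) with a Gaussian Parzen window and takes one gradient-ascent step for a linear filter. Maximizing V(e) is equivalent to minimizing Renyi's quadratic entropy H2(e) = -log V(e); this is a minimal illustration of the criterion under those assumptions, not a definitive implementation.

```python
import numpy as np

def gaussian(u, sigma):
    """Gaussian kernel G_sigma(u), applied elementwise."""
    return np.exp(-u**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

def information_potential(e, sigma):
    """Parzen estimate of Renyi's quadratic information potential:
    V(e) = (1/N^2) * sum_ij G_{sigma*sqrt(2)}(e_i - e_j).
    MEE maximizes V(e), i.e. minimizes H2(e) = -log V(e).
    Note the O(N^2) pairwise cost that the fast MEE methods remove."""
    diff = e[:, None] - e[None, :]
    return gaussian(diff, sigma * np.sqrt(2)).mean()

def mee_gradient(e, X, sigma):
    """Gradient of V(e) w.r.t. the weights of a linear filter
    y_i = X[i] @ w, e_i = d_i - y_i (rows of X are input vectors):
    dV/dw = (1/(2 sigma^2 N^2)) sum_ij G_ij (e_i - e_j)(x_i - x_j)."""
    diff = e[:, None] - e[None, :]                  # e_i - e_j
    g = gaussian(diff, sigma * np.sqrt(2))          # pairwise kernel values
    xdiff = X[:, None, :] - X[None, :, :]           # x_i - x_j
    return ((g * diff)[..., None] * xdiff).mean(axis=(0, 1)) / (2 * sigma**2)

def mee_step(w, X, d, sigma, eta):
    """One steepest-ascent MEE update with a fixed step size eta."""
    e = d - X @ w
    return w + eta * mee_gradient(e, X, sigma)
```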
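MEE-SAS replaces V(e) with the cost J(w) = [V(0) - V(e)]^2, where V(0) = 1/(2*sigma*sqrt(pi)) is the largest value the information potential can attain (all errors equal) and serves as the built-in "Target". Its gradient is the MEE gradient scaled by 2[V(0) - V(e)], so the effective step size shrinks automatically as the solution is approached. The sketch below reuses the helpers above; the switching threshold tau is our illustrative choice, not a value from the dissertation.

```python
def V_max(sigma):
    """Maximum attainable information potential, the MEE-SAS 'Target'."""
    return 1.0 / (2 * sigma * np.sqrt(np.pi))

def mee_sas_step(w, X, d, sigma, eta):
    """One MEE-SAS update, descending J(w) = [V(0) - V(e)]^2.
    The factor 2*(V0 - V) self-adjusts the step: large far from
    the optimum, vanishing near it (where the surface is flat)."""
    e = d - X @ w
    gap = V_max(sigma) - information_potential(e, sigma)   # always >= 0
    return w + eta * 2 * gap * mee_gradient(e, X, sigma), gap

def switched_step(w, X, d, sigma, eta, tau=0.05):
    """MEE / MEE-SAS switching for nonstationary tracking: use
    MEE-SAS while far from the solution (fast convergence), and
    plain MEE once near it (better tracking); tau is hypothetical."""
    w_sas, gap = mee_sas_step(w, X, d, sigma, eta)
    return w_sas if gap >= tau else mee_step(w, X, d, sigma, eta)
```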
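NMEE is posed as minimizing ||w_{k+1} - w_k||^2 subject to the error reaching the optimal information potential. Linearizing the constraint, V(w + dw) ~= V(w) + grad(V)' dw, and solving with a Lagrange multiplier gives a normalized step dw = (V0 - V) * grad(V) / ||grad(V)||^2, which removes the dependence on input power and kernel-size scaling. The sketch below is our first-order reconstruction of that stated optimization, not necessarily the exact NMEE recursion from the dissertation.

```python
def nmee_step(w, X, d, sigma, eta, eps=1e-12):
    """Normalized MEE step from the constrained problem
        min ||dw||^2  s.t.  V(w + dw) = V0,
    solved to first order: dw = (V0 - V) * grad / ||grad||^2.
    The normalization makes the update insensitive to input power;
    eps (our choice) guards against a vanishing gradient."""
    e = d - X @ w
    grad = mee_gradient(e, X, sigma)
    gap = V_max(sigma) - information_potential(e, sigma)
    return w + eta * gap * grad / (grad @ grad + eps)
```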
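MEE-FP follows from the first-order optimality condition grad(V) = 0. Substituting e_i - e_j = (d_i - d_j) - w'(x_i - x_j) turns that condition into kernel-weighted normal equations; freezing the Gaussian weights at the current solution and solving gives an RLS-like, second-order fixed-point iteration. A minimal batch sketch under those assumptions (the regularizer is ours):

```python
def mee_fp_step(w, X, d, sigma, reg=1e-8):
    """One fixed-point MEE iteration: solve the kernel-weighted
    normal equations R w = p, with the pairwise Gaussian weights
    G_ij evaluated at the current w, then iterate to convergence."""
    e = d - X @ w
    g = gaussian(e[:, None] - e[None, :], sigma * np.sqrt(2))
    xdiff = X[:, None, :] - X[None, :, :]               # x_i - x_j
    ddiff = d[:, None] - d[None, :]                     # d_i - d_j
    R = np.einsum('ij,ijm,ijn->mn', g, xdiff, xdiff)    # weighted "autocorrelation"
    p = np.einsum('ij,ij,ijm->m', g, ddiff, xdiff)      # weighted "cross-correlation"
    return np.linalg.solve(R + reg * np.eye(len(w)), p)
```

Like RLS, each step costs a linear solve rather than a gradient step, which is why the convergence rate does not depend on the eigenvalue spread of the input correlation matrix.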
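The O(N^2) bottleneck is the pairwise Gaussian sum inside V(e). The FGT/IFGT replace it with a truncated series; in the IFGT factorization, exp(2(y-c)(x-c)/h^2) is expanded in a Taylor series about a cluster center c, which decouples sources from targets so each sum costs O(pN). Below is a single-center, one-dimensional sketch of that factorization (the full IFGT clusters the points so that each stays within about one bandwidth of its center; names and the test tolerance are ours).

```python
import numpy as np
from math import factorial

def gauss_sum_direct(y, x, q, h):
    """Direct O(N^2) sum: G(y_i) = sum_j q_j exp(-(y_i - x_j)^2 / h^2)."""
    return (q[None, :] * np.exp(-(y[:, None] - x[None, :])**2 / h**2)).sum(axis=1)

def gauss_sum_ifgt_1d(y, x, q, h, p=12):
    """O(pN) single-cluster IFGT: with center c,
    exp(-(y-x)^2/h^2) = exp(-(y-c)^2/h^2) exp(-(x-c)^2/h^2) exp(2(y-c)(x-c)/h^2),
    and the last factor is truncated to p Taylor terms, so the source
    moments M_n can be precomputed independently of the targets."""
    c = x.mean()
    xs, ys = x - c, y - c
    wx = q * np.exp(-xs**2 / h**2)
    M = np.array([np.sum(wx * xs**n) for n in range(p)])            # source moments, O(pN)
    coef = np.array([(2.0 / h**2)**n / factorial(n) for n in range(p)])
    poly = np.stack([ys**n for n in range(p)], axis=1) @ (coef * M)  # O(pN) evaluation
    return np.exp(-ys**2 / h**2) * poly
```

With h comparable to the data spread, np.max(np.abs(gauss_sum_direct(y, x, q, h) - gauss_sum_ifgt_1d(y, x, q, h, p))) decays rapidly as p grows, which is what makes the O(pN) estimate of the information potential practical.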