Related publications:
- ANN-based estimator for distillation using Levenberg-Marquardt approach. Engineering Applications of Artificial Intelligence.
- Letter: A robust approach to empirical PDF estimate. Neurocomputing.
- Information theoretic learning with adaptive kernels. Signal Processing.
- Mean-square convergence analysis of ADALINE training with minimum error entropy criterion. IEEE Transactions on Neural Networks.
Recent publications have proposed various information-theoretic learning (ITL) criteria, based on Rényi's quadratic entropy with nonparametric kernel-based density estimation, as alternative performance metrics for both supervised and unsupervised adaptive system training. Unlike the mean-square error (MSE) criterion, these entropy- and mutual-information-based metrics take higher-order statistics into account. Their drawback is increased computational complexity, which underscores the importance of efficient training algorithms. In this paper, we examine familiar advanced parameter-search algorithms and propose modifications that allow systems to be trained with these ITL criteria. The well-known algorithms tailored here for ITL include various improved gradient-descent methods, conjugate-gradient approaches, and the Levenberg-Marquardt (LM) algorithm. Sample problems and metrics are presented to illustrate the computational efficiency attained by employing the proposed algorithms.
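As a concrete illustration of the kernel-based ITL criterion the abstract refers to, the following sketch estimates Rényi's quadratic entropy from samples via a Parzen window with Gaussian kernels. The plug-in estimator reduces to the pairwise "information potential" V = (1/N²) Σᵢⱼ G_{σ√2}(xᵢ − xⱼ), with H₂ ≈ −log V; the O(N²) pairwise sum is exactly the computational burden the paper's search algorithms aim to amortize. The function name and the kernel width `sigma` are illustrative choices, not identifiers from the paper.

```python
import numpy as np

def renyi_quadratic_entropy(x, sigma=1.0):
    """Parzen-window estimate of Renyi's quadratic entropy
    H2(X) = -log \\int p(x)^2 dx for 1-D samples x.

    Convolving two Gaussian kernels of width sigma yields a single
    Gaussian of variance 2*sigma^2, so the estimate is a double sum
    over all sample pairs (the "information potential").
    """
    x = np.asarray(x, dtype=float).reshape(-1)
    diffs = x[:, None] - x[None, :]        # all pairwise differences, O(N^2)
    s2 = 2.0 * sigma ** 2                  # variance of the convolved kernel
    kernel = np.exp(-diffs ** 2 / (2.0 * s2)) / np.sqrt(2.0 * np.pi * s2)
    v = kernel.mean()                      # information potential V
    return -np.log(v)                      # H2 estimate
```

Widely dispersed samples yield a smaller information potential and hence a larger entropy estimate than tightly clustered ones, matching the intuition for an entropy criterion.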