This book presents the first cohesive treatment of Information Theoretic Learning (ITL) algorithms for adapting linear or nonlinear learning machines in both supervised and unsupervised paradigms. ITL is a framework in which the conventional second-order statistics (covariance, L2 distances, correlation functions) are replaced by scalars and functions with information-theoretic underpinnings: entropy, mutual information, and correntropy, respectively. ITL quantifies the stochastic structure of the data beyond second-order statistics for improved performance, without resorting to full-blown Bayesian approaches that carry a much larger computational cost. This is possible because of a non-parametric estimator of Renyi's quadratic entropy that depends only on pairwise differences between samples. The book compares the performance of ITL algorithms with their second-order counterparts in many engineering and machine learning applications. Students, practitioners, and researchers interested in statistical signal processing, computational intelligence, and machine learning will find in this book the theory to understand the basics, the algorithms to implement applications, and exciting but still unexplored leads that provide fertile ground for future research.
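The pairwise-difference estimator referred to above is the Parzen-window estimate of Renyi's quadratic entropy, H2(X) = -log( (1/N^2) * sum_i sum_j G(x_i - x_j; sqrt(2)*sigma) ), where G is a Gaussian kernel of bandwidth sigma and the double sum is known as the information potential. Below is a minimal Python sketch of that estimator, offered only as an illustration under these assumptions; the function name, the Gaussian kernel choice, and the bandwidth default are my own and are not code from the book.

import numpy as np

def renyi_quadratic_entropy(samples, sigma=1.0):
    # Parzen-window estimate of Renyi's quadratic entropy H2(X).
    # The "information potential" is the mean of Gaussian kernel
    # evaluations over all pairwise sample differences.
    # NOTE: function name, Gaussian kernel, and sigma default are
    # illustrative assumptions, not the book's reference implementation.
    x = np.asarray(samples, dtype=float).reshape(len(samples), -1)
    diffs = x[:, None, :] - x[None, :, :]          # pairwise differences x_i - x_j
    sq_dists = np.sum(diffs ** 2, axis=-1)         # squared Euclidean distances
    d = x.shape[1]
    # Gaussian kernel of bandwidth sigma*sqrt(2) (convolution of two Parzen kernels)
    norm = (2.0 * np.pi * 2.0 * sigma ** 2) ** (d / 2.0)
    kernel_vals = np.exp(-sq_dists / (4.0 * sigma ** 2)) / norm
    information_potential = kernel_vals.mean()     # (1/N^2) * double sum
    return -np.log(information_potential)

# Example: entropy estimate for 1-D Gaussian samples
rng = np.random.default_rng(0)
print(renyi_quadratic_entropy(rng.normal(size=200), sigma=0.5))

Because the estimate is just a double sum of kernel evaluations, its gradient with respect to the samples (or to adaptive filter weights producing them) is also a pairwise sum, which is what makes entropy a practical adaptation criterion in ITL.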