This paper discusses a framework for learning based on information theoretic criteria. A novel algorithm based on Renyi's quadratic entropy is used to train, directly from a data set, linear or nonlinear mappers for entropy maximization or minimization. We provide an intriguing analogy between this computation and an information potential that measures the interactions among the data samples. We also propose two approximations to the Kullback-Leibler divergence based on quadratic distances (the Cauchy-Schwarz inequality and the Euclidean distance); these distances can likewise be computed from the information potential. We test the newly proposed distances in blind source separation (unsupervised learning) and in feature extraction for classification (supervised learning). In blind source separation our algorithm is capable of separating instantaneously mixed sources, and for classification the performance of our classifier is comparable to that of support vector machines (SVMs).
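To make the estimators concrete, below is a minimal sketch (not the authors' implementation) of Renyi's quadratic entropy and the Cauchy-Schwarz divergence computed from Gaussian-kernel information potentials. The function names and the kernel bandwidth `sigma` are illustrative assumptions; the sketch relies only on the standard Parzen-window identity that two convolved Gaussian kernels of variance sigma^2 yield one Gaussian of variance 2*sigma^2.

```python
import numpy as np

def _as_2d(x):
    """Ensure samples have shape (N, d)."""
    x = np.asarray(x, dtype=float)
    return x[:, None] if x.ndim == 1 else x

def information_potential(x, y, sigma=1.0):
    """Cross information potential: Parzen estimate of the integral of
    p_x * p_y with Gaussian kernels of bandwidth sigma. Convolving two
    such kernels gives one Gaussian of variance 2*sigma**2."""
    x, y = _as_2d(x), _as_2d(y)
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)  # pairwise sq. dists
    dim = x.shape[1]
    norm = (4.0 * np.pi * sigma**2) ** (dim / 2.0)
    return np.exp(-d2 / (4.0 * sigma**2)).mean() / norm

def renyi_quadratic_entropy(x, sigma=1.0):
    """H2(X) = -log integral p(x)^2 dx, estimated as -log V(x, x)."""
    return -np.log(information_potential(x, x, sigma))

def cauchy_schwarz_divergence(x, y, sigma=1.0):
    """D_CS = -log( V(x,y)^2 / (V(x,x) * V(y,y)) ): nonnegative, and zero
    when the two Parzen-estimated densities coincide."""
    vxy = information_potential(x, y, sigma)
    vxx = information_potential(x, x, sigma)
    vyy = information_potential(y, y, sigma)
    return -np.log(vxy**2 / (vxx * vyy))

# Toy usage: the divergence grows as two Gaussian clouds move apart.
rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(500, 1))
b = rng.normal(3.0, 1.0, size=(500, 1))
print("H2(a) =", renyi_quadratic_entropy(a, sigma=0.5))
print("D_CS  =", cauchy_schwarz_divergence(a, b, sigma=0.5))
```

Both quantities reduce to sums of pairwise Gaussian interactions among samples, which is what allows a mapper to be trained directly from the data set without fitting an explicit density model.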