Exploiting quadratic mutual information for discriminant analysis
SETN'12 Proceedings of the 7th Hellenic conference on Artificial Intelligence: theories and applications
Determining optimal subspace projections that preserve task-relevant information in the data is an important problem in machine learning and pattern recognition. In this paper, we propose a nonparametric nonlinear subspace projection technique that maximally maintains class separability under the Shannon mutual information (MI) criterion. Employing kernel density estimates for the nonparametric estimation of MI makes possible an interesting marriage of kernel density estimation-based information-theoretic methods and kernel machines, which can determine nonparametric nonlinear solutions for difficult machine learning problems. Significant computational savings are achieved by translating the definition of the desired projection into the kernel-induced feature space, which leads to an analytical solution.
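As a rough illustration of the kind of criterion involved, the following sketch estimates quadratic mutual information between one-dimensional projected features and class labels with Gaussian Parzen windows, using the standard information-potential decomposition (within-class, all-data, and between terms). This is a minimal, assumed implementation for intuition only, not the paper's method: the function names, the fixed kernel width `sigma`, and the use of quadratic (rather than Shannon) MI are illustrative choices.

```python
import numpy as np

def gauss(d2, sigma2):
    """Isotropic Gaussian kernel evaluated on squared distances."""
    return np.exp(-d2 / (2.0 * sigma2))

def quadratic_mi(Y, labels, sigma=1.0):
    """Parzen-window estimate of quadratic MI between features Y and labels.

    Uses the information-potential form V_in + V_all - 2*V_btw; the
    convolution of two Gaussian windows doubles the kernel variance.
    (Illustrative sketch, not the paper's exact estimator.)
    """
    Y = np.atleast_2d(Y)
    N = Y.shape[0]
    # Pairwise squared distances between projected samples.
    d2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    K = gauss(d2, 2.0 * sigma ** 2)
    classes, counts = np.unique(labels, return_counts=True)
    p = counts / N  # class priors
    # Within-class, all-pairs, and cross information potentials.
    v_in = sum(K[np.ix_(labels == c, labels == c)].sum() for c in classes) / N**2
    v_all = (p ** 2).sum() * K.sum() / N**2
    v_btw = sum(pc * K[labels == c].sum() for c, pc in zip(classes, p)) / N**2
    return v_in + v_all - 2.0 * v_btw
```

Because the estimate is an integrated squared difference between the joint density and the product of marginals, it is non-negative and grows as the projected classes separate, which is what makes it usable as a projection-selection criterion.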