In this paper, we propose the regularized discriminant entropy (RDE), which considers both class information and scatter information in the original data. By maximizing the RDE, we develop a supervised feature extraction algorithm called regularized discriminant entropy analysis (RDEA). RDEA is simple and requires no approximation in its theoretical derivation. Experiments on several publicly available data sets demonstrate the feasibility and effectiveness of the proposed algorithm, with encouraging results.
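The abstract does not spell out RDEA's criterion, so the following is only an illustrative stand-in for supervised feature extraction that combines class information, scatter information, and regularization: a regularized Fisher-style projection maximizing the between-class/within-class scatter ratio. The function name, the criterion, and the `reg` parameter are assumptions for illustration, not the authors' method.

```python
import numpy as np

def regularized_discriminant_direction(X, y, reg=1e-3):
    """Illustrative stand-in (NOT the paper's RDEA): return the 1-D
    projection maximizing a regularized between/within scatter ratio."""
    classes = np.unique(y)
    n, d = X.shape
    mu = X.mean(axis=0)
    Sw = np.zeros((d, d))  # within-class scatter (class information)
    Sb = np.zeros((d, d))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mu)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # Regularize the within-class scatter before inverting, so the
    # criterion stays well-posed even for small or degenerate samples.
    Sw += reg * np.eye(d)
    # Leading eigenvector of Sw^{-1} Sb maximizes the scatter ratio.
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    w = np.real(evecs[:, np.argmax(np.real(evals))])
    return w / np.linalg.norm(w)
```

On two Gaussian classes separated along the first coordinate, the recovered direction aligns with that coordinate; an entropy-based criterion such as RDE would replace the scatter ratio with an information-theoretic objective.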