Kernel functions play a central role in kernel methods, so optimizing the kernel function has long been a promising research direction. In principle, the Fisher discriminant criterion can serve as the objective function for kernel optimization, enlarging the margin between classes. Unfortunately, the Fisher criterion is optimal only when all classes are generated from multivariate normal distributions sharing a common covariance matrix but differing in mean, with each class forming a single cluster. Because of these assumptions, the Fisher criterion is not a suitable kernel optimization rule for applications such as multimodally distributed data, and many improved discriminant criteria (DC) have recently been developed to address this shortcoming. To bring these criteria to kernel optimization, this paper proposes a unified kernel optimization framework based on a data-dependent kernel function; the framework accepts any discriminant criterion formulated in a pairwise manner as its objective function. Within the framework, employing a different discriminant criterion requires only changing the corresponding affinity matrices, without resorting to complex derivations in the feature space. Experimental results on benchmark data demonstrate the effectiveness of our method.
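The abstract leaves the construction implicit, so the sketch below is only one plausible realization, not the paper's actual algorithm. It assumes the standard conformal data-dependent kernel k(x, y) = q(x) q(y) k0(x, y), where the conformal factor q(x) = alpha_0 + sum_m alpha_m k1(x, a_m) is expanded over the training points with k1 taken equal to k0; the plain Fisher affinity matrices stand in for one instance of a pairwise discriminant criterion, and the closed-form generalized-eigenvector solver, all function names (rbf_kernel, fisher_affinities, optimize_kernel), and parameter defaults are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def rbf_kernel(X, Y, gamma=1.0):
    """Basic kernel k0; the data-dependent kernel conformally rescales it."""
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def fisher_affinities(y):
    """Affinity matrices reproducing the plain Fisher criterion.
    Replacing this one function (e.g., with locality-weighted affinities)
    switches the discriminant criterion; nothing else changes."""
    same = (y[:, None] == y[None, :]).astype(float)
    return 1.0 - same, same  # (between-class, within-class)

def optimize_kernel(X, y, gamma=1.0, ridge=1e-6):
    """Maximize the pairwise criterion
        J(alpha) = (q^T B q) / (q^T W q),
    where q = Phi @ alpha stacks the conformal factor over the training
    set, B = A_between * K0 and W = A_within * K0 (elementwise products).
    J is a generalized Rayleigh quotient in alpha, so its maximizer is
    the leading generalized eigenvector."""
    y = np.asarray(y)
    K0 = rbf_kernel(X, X, gamma)                  # basic kernel matrix
    Phi = np.hstack([np.ones((len(y), 1)), K0])   # q = Phi @ alpha (k1 := k0)
    A_b, A_w = fisher_affinities(y)
    B, W = A_b * K0, A_w * K0
    Mb = Phi.T @ B @ Phi
    Mw = Phi.T @ W @ Phi + ridge * np.eye(Phi.shape[1])  # keep Mw positive definite
    _, vecs = eigh(Mb, Mw)             # generalized eigenproblem, ascending eigenvalues
    alpha = vecs[:, -1]                # top eigenvector maximizes J
    q = Phi @ alpha
    if q.mean() < 0:                   # resolve the global sign ambiguity;
        q, alpha = -q, -alpha          # a real method would also enforce q > 0
    K = (q[:, None] * q[None, :]) * K0  # optimized data-dependent kernel matrix
    return K, alpha, q

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 1.0, (30, 2)), rng.normal(3.0, 1.0, (30, 2))])
    y = np.array([0] * 30 + [1] * 30)
    K, alpha, q = optimize_kernel(X, y, gamma=0.5)
    print("optimized kernel matrix shape:", K.shape)
```

Swapping fisher_affinities for, say, a locality-weighted pair of affinity matrices changes the criterion while the optimizer stays untouched, which is exactly the modularity the abstract claims.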