The eigenstructure of the second-order statistics of a multivariate random population can be inferred from the matrix of pairwise inner products of the samples. It can therefore also be obtained efficiently in the implicit, high-dimensional feature spaces defined by kernel functions. We elaborate on this property to derive general expressions from which nonlinear counterparts of a number of standard pattern analysis algorithms follow immediately, including principal component analysis, data compression and denoising, and Fisher's discriminant. The connection between kernel methods and nonparametric density estimation is also illustrated. Using these results, we introduce the kernel version of the Mahalanobis distance, which yields nonparametric models with unexpected and interesting properties, and we also propose a kernel version of the minimum squared error (MSE) linear discriminant function. This learning machine is particularly simple and subsumes a number of generalized linear models, such as the potential functions method and the radial basis function (RBF) network. Our results shed some light on the relative merits of feature spaces and inductive bias in the remarkable generalization properties of the support vector machine (SVM). Although the SVM obtains the lowest error rates in most situations, exhaustive experiments with synthetic and natural data show that simple kernel machines based on pseudoinversion are competitive in problems with appreciable class overlap.
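The first claim above — that the feature-space eigenstructure can be recovered from the matrix of pairwise kernel evaluations — can be illustrated with a minimal kernel PCA sketch. This is an illustrative implementation, not the paper's code; the RBF kernel, the `gamma` value, and all function names are assumptions.

```python
import numpy as np

def kernel_pca(X, gamma=0.5, n_components=2):
    """Recover the leading feature-space principal directions from the
    centered Gram matrix of pairwise RBF kernel evaluations."""
    n = X.shape[0]
    # Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    # Center in feature space: K_c = (I - 11^T/n) K (I - 11^T/n)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J
    # Eigendecomposition of the centered Gram matrix (ascending order)
    w, V = np.linalg.eigh(Kc)
    w, V = w[::-1], V[:, ::-1]  # reorder to descending eigenvalues
    # Scale coefficients so the implicit feature-space eigenvectors have unit norm
    alphas = V[:, :n_components] / np.sqrt(np.maximum(w[:n_components], 1e-12))
    # Projections of the training samples onto the principal directions
    return Kc @ alphas
```

Note that only inner products (kernel values) between samples are ever computed; the high-dimensional feature space is never represented explicitly.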
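The pseudoinverse-based kernel MSE discriminant mentioned in the closing sentences can be sketched as follows. This is a hedged illustration under assumed choices (RBF kernel, `gamma` parameter, two-class labels in {-1, +1}); the function names are hypothetical, not from the paper.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2)."""
    d2 = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-gamma * d2)

def fit_kernel_mse(X, y, gamma=0.5):
    """Kernel MSE discriminant: minimum-norm least-squares solution of
    K @ alpha = y, obtained by pseudoinversion of the Gram matrix."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.pinv(K) @ y

def predict_kernel_mse(X_train, alpha, X_test, gamma=0.5):
    """Sign of the kernel expansion evaluated at the test points."""
    return np.sign(rbf_kernel(X_test, X_train, gamma) @ alpha)
```

With an RBF kernel, the expansion reduces to a radial basis function network whose weights are fitted in closed form, which is one reason the abstract describes this learning machine as particularly simple.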