Learning subspace kernels for classification
Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
Kernel-based nonlinear feature extraction (KFE), or dimensionality reduction, is a widely used preprocessing step in pattern classification and data mining tasks. Given a positive definite kernel function, it is well known that the input data are implicitly mapped to a feature space of usually very high dimensionality. The goal of KFE is to find a low-dimensional subspace of this feature space that retains most of the information needed for classification or data analysis. In this paper, we propose a subspace kernel with which the feature extraction problem is transformed into a kernel parameter learning problem. The key observation is that when data are projected into a low-dimensional subspace of the feature space, the parameters describing this subspace can be regarded as the parameters of the kernel function between the projected data. Consequently, existing kernel parameter learning methods can be adapted to optimize this parameterized kernel function. Experimental results are provided to validate the effectiveness of the proposed approach.
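To make the key observation concrete, the following is a minimal sketch, assuming the subspace is spanned by linear combinations of the mapped training points (a common representer-style parameterization): with basis P = Phi(X) W, the projection of a point x is z(x) = W^T k(X, x), so the kernel between projected points is k_sub(x, x') = k(x, X) W W^T k(X, x'), and W plays the role of a learnable kernel parameter. The function names, the RBF base kernel, and the random W below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Base RBF kernel matrix between row-wise data sets A and B."""
    sq = (A ** 2).sum(1)[:, None] + (B ** 2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

def subspace_kernel(Xa, Xb, X_train, W, gamma=1.0):
    """Subspace kernel k_sub(x, x') = k(x, X) W W^T k(X, x').

    The subspace is spanned by the columns of Phi(X_train) @ W, so
    W (n_train x r) acts as the kernel parameter to be optimized.
    """
    Za = rbf_kernel(Xa, X_train, gamma) @ W  # z(x) = W^T k(X, x)
    Zb = rbf_kernel(Xb, X_train, gamma) @ W
    return Za @ Zb.T

# Toy usage: random data and a random r-dimensional subspace.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(20, 5))
W = rng.normal(size=(20, 3))  # in practice W would be learned
K_sub = subspace_kernel(X_train, X_train, X_train, W)
print(K_sub.shape)  # (20, 20)
```

Because k_sub depends on the data only through the base kernel, any gradient-based or cross-validation-based kernel parameter learning method can in principle be applied to W, which is the transformation the abstract describes.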