In this paper, we discuss subspace-based support vector machines (SS-SVMs), in which an input vector is classified into the class with the maximum similarity. Namely, for each class we define a weighted similarity measure using vectors, called dictionaries, that represent the class, and we optimize the weights so that the margin between classes is maximized. Because a similarity measure is defined for each class, for a given data sample the similarity measure of the class to which the sample belongs needs to be the largest among all the similarity measures. Introducing slack variables, we express these constraints either as equality constraints or as inequality constraints. As a result, we obtain subspace-based least squares SVMs (SSLS-SVMs) and subspace-based linear programming SVMs (SSLP-SVMs), respectively. To speed up training of SSLS-SVMs, whose all-at-once formulation is similar to that of LS-SVMs, we also propose SSLS-SVMs in a one-against-all formulation, which optimizes each similarity measure separately. Using two-class problems, we clarify the difference between SSLS-SVMs and SSLP-SVMs, and we evaluate the effectiveness of the proposed methods against the conventional methods with equal weights and with weights equal to the eigenvalues.
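To make the maximum-similarity decision rule concrete, the following is a minimal sketch assuming a linear input space and PCA-derived dictionary vectors; the paper itself learns the weights by margin maximization through the equality- or inequality-constrained formulations described above, which is not reproduced here. Equal weights correspond to the conventional baseline mentioned at the end of the abstract, and all function names and the toy data are illustrative assumptions.

import numpy as np

def class_dictionary(X, r):
    # Illustrative dictionary: the top-r principal directions of the
    # class data X (n_samples x d), an orthonormal basis of shape (r, d).
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:r]

def similarity(x, V, w):
    # Weighted similarity of sample x to a class with dictionary V and
    # nonnegative weights w: weighted squared projections onto the
    # dictionary vectors, normalized by the sample's squared norm.
    p = V @ x
    return np.dot(w, p ** 2) / np.dot(x, x)

def classify(x, dictionaries, weights):
    # Assign x to the class whose weighted similarity is largest.
    scores = [similarity(x, V, w) for V, w in zip(dictionaries, weights)]
    return int(np.argmax(scores))

# Toy two-class data whose dominant variance lies in different subspaces.
rng = np.random.default_rng(0)
X0 = rng.normal(size=(100, 5)) * np.array([3.0, 1.0, 0.2, 0.2, 0.2])
X1 = rng.normal(size=(100, 5)) * np.array([0.2, 0.2, 3.0, 1.0, 0.2])
dictionaries = [class_dictionary(X0, 2), class_dictionary(X1, 2)]
weights = [np.ones(2), np.ones(2)]  # equal weights: the conventional baseline
acc0 = np.mean([classify(x, dictionaries, weights) == 0 for x in X0])
acc1 = np.mean([classify(x, dictionaries, weights) == 1 for x in X1])
print(acc0, acc1)

With equal weights and orthonormal dictionary vectors, this similarity is the squared cosine between the sample and its projection onto the class subspace, i.e., the fraction of the sample's energy lying in that subspace; the proposed SSLS-/SSLP-SVMs replace the fixed weights with ones trained so that the margin between the class similarity measures is maximized.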