In this paper we discuss sparse least squares support vector machines (sparse LS SVMs) trained in the empirical feature space, which is spanned by the mapped training data. First, we show that the kernel associated with the empirical feature space gives the same value as the kernel associated with the feature space whenever one of its arguments is mapped into the empirical feature space by the mapping function associated with the feature space. Using this fact, we show that kernel-based methods can be trained and tested in the empirical feature space, and that training LS SVMs in the empirical feature space reduces to solving a set of linear equations. We then derive sparse LS SVMs by restricting the solution to the linearly independent training data in the empirical feature space, selected by Cholesky factorization. The support vectors correspond to the selected training data and do not change when the value of the margin parameter is changed. Thus, for linear kernels, the number of support vectors is at most the number of input variables. Computer experiments show that the number of support vectors can be reduced without deteriorating the generalization ability.
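To make the construction concrete, below is a minimal sketch of the two steps the abstract describes: selecting linearly independent training data by an incomplete Cholesky factorization of the kernel matrix, and then training an LS SVM in the reduced empirical feature space by solving a set of linear equations. It assumes an RBF kernel and NumPy; the function names, the tolerance tol, and the primal least-squares formulation used here are illustrative choices, not the paper's exact formulation.

import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Gaussian (RBF) kernel matrix between row-sample sets X and Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def select_independent(K, tol=1e-6):
    # Greedy (incomplete) Cholesky factorization of the kernel matrix K.
    # Indices whose residual pivot falls below tol are linearly dependent
    # in the empirical feature space and are skipped; the remaining
    # indices become the support vectors.
    n = K.shape[0]
    selected = []
    L = np.zeros((n, 0))
    for i in range(n):
        # residual of column i after projecting onto selected columns
        li = K[:, i] - L @ L[i]
        piv = li[i]
        if piv < tol:
            continue  # linearly dependent in the empirical feature space
        selected.append(i)
        L = np.hstack([L, (li / np.sqrt(piv))[:, None]])
    return selected

def train_sparse_ls_svm(X, y, gamma=1.0, C=10.0, tol=1e-6):
    # Sketch of LS-SVM training in the reduced empirical feature space:
    # minimize 0.5 w'w + 0.5 C sum_i e_i^2 with y_i = w' h(x_i) + b + e_i,
    # where h(x) = (k(x, x_s))_{s in selected}. Setting the gradients to
    # zero yields the linear system assembled below.
    K = rbf_kernel(X, X, gamma)
    sv = select_independent(K, tol)
    H = K[:, sv]              # all samples mapped into the reduced space
    M = len(sv)
    A = np.zeros((M + 1, M + 1))
    A[:M, :M] = H.T @ H + np.eye(M) / C
    A[:M, M] = H.sum(axis=0)
    A[M, :M] = H.sum(axis=0)
    A[M, M] = len(y)
    rhs = np.concatenate([H.T @ y, [y.sum()]])
    sol = np.linalg.solve(A, rhs)
    return sv, sol[:M], sol[M]

def predict(X_new, X, sv, w, b, gamma=1.0):
    # Decision values: map new data into the reduced empirical
    # feature space spanned by the selected training data.
    return rbf_kernel(X_new, X[sv], gamma) @ w + b

# Toy usage on hypothetical 2-D two-class data:
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 2))
y = np.sign(X[:, 0] + X[:, 1])
sv, w, b = train_sparse_ls_svm(X, y, gamma=0.5, C=10.0)
print(len(sv), "support vectors out of", len(y))

Note that the selected indices depend only on the kernel matrix, not on the margin parameter C, which matches the statement above that the support vectors do not change with the margin parameter. For a linear kernel k(x, z) = x'z the kernel matrix has rank at most the input dimension, so the selection step returns at most that many support vectors.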