In this paper, a fast feature-selection method for the kernel minimum squared error (KMSE) model is proposed to mitigate the computational burden that arises when the number of training patterns is large. Compared with existing feature-selection algorithms for KMSE, the proposed iterative KMSE (IKMSE) improves computational efficiency without sacrificing generalization performance. Experiments on benchmark data sets, a nonlinear autoregressive model, and a real-world problem demonstrate the efficacy and feasibility of IKMSE. In addition, IKMSE can readily be extended to classification tasks.
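
As a rough illustration of the kind of computation involved, below is a minimal Python sketch of KMSE regression with greedy forward selection of expansion nodes. It is not the authors' IKMSE algorithm: the RBF kernel, the ridge regularization, and the squared-error selection criterion are all assumptions made for illustration only.

import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between rows of A and rows of B.
    # The kernel choice is an assumption; any Mercer kernel would do.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * d2)

def kmse_fit(K, y, lam=1e-3):
    # Regularized least-squares coefficients for the kernel expansion:
    # alpha = (K^T K + lam I)^{-1} K^T y, with K the n-by-m kernel matrix
    # between all training patterns and the m retained nodes.
    m = K.shape[1]
    return np.linalg.solve(K.T @ K + lam * np.eye(m), K.T @ y)

def greedy_select(X, y, m, gamma=1.0, lam=1e-3):
    # Greedily pick m training patterns whose kernel columns most reduce
    # the squared training error. This criterion is a hypothetical
    # stand-in for the paper's actual iterative selection rule.
    selected, remaining = [], list(range(len(X)))
    for _ in range(m):
        best, best_err = None, np.inf
        for j in remaining:
            cols = selected + [j]
            K = rbf_kernel(X, X[cols], gamma)
            alpha = kmse_fit(K, y, lam)
            err = np.mean((K @ alpha - y) ** 2)
            if err < best_err:
                best, best_err = j, err
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy usage: fit a noisy sine with only 10 retained nodes.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
nodes = greedy_select(X, y, m=10)
K = rbf_kernel(X, X[nodes])
alpha = kmse_fit(K, y)
print("train MSE with 10 nodes:", np.mean((K @ alpha - y) ** 2))

The point of restricting the expansion to a small subset of the training patterns is that both training (solving an m-by-m system instead of an n-by-n one) and prediction (evaluating m kernel functions per test point) scale with the number of retained nodes rather than with the full training-set size.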