Least squares support vector machine (LS-SVM) classifiers are a class of kernel methods whose solution follows from a set of linear equations. In this work we present low-rank modifications to LS-SVM classifiers that enable fast and efficient variable selection. With a linear kernel, the inclusion or removal of a candidate variable can be represented as a low-rank modification of the kernel matrix of the LS-SVM classifier. The LS-SVM solution can therefore be updated rather than recomputed from scratch, which improves the efficiency of the overall variable selection process. Relevant variables are selected according to a closed-form leave-one-out (LOO) error estimator, which is obtained as a by-product of the low-rank modifications. The proposed approach is applied to several benchmark data sets as well as two microarray data sets. Compared with related variable selection algorithms, simulations show that our approach has a markedly lower computational cost together with good stability of the generalization error.
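To make the mechanism concrete, the following is a minimal sketch (not the paper's exact implementation) of the two ingredients the abstract describes for a linear kernel: adding a candidate variable changes the kernel matrix by a rank-one term, so the inverse of the LS-SVM system can be updated via the Sherman-Morrison formula, and the LOO residuals then follow in closed form from the dual coefficients and the diagonal of that inverse. Function names and the explicit-inverse bookkeeping are assumptions of this illustration.

```python
import numpy as np

def lssvm_system_inverse(K, gamma):
    """Invert the LS-SVM linear system
        [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
    (Explicit inversion for clarity; a factorization would be used in practice.)"""
    n = K.shape[0]
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    return np.linalg.inv(A)

def add_variable(Ainv, x):
    """Include a candidate variable with sample column x: for a linear
    kernel, K -> K + x x^T is a rank-one update, so A^{-1} is updated
    with the Sherman-Morrison formula instead of re-inverting."""
    u = np.concatenate(([0.0], x))  # only the kernel block of A changes
    Au = Ainv @ u
    return Ainv - np.outer(Au, Au) / (1.0 + u @ Au)

def loo_residuals(Ainv, y):
    """Closed-form LOO residuals: r_i = alpha_i / (A^{-1})_{ii},
    read off the alpha block of the solution and the diagonal of A^{-1}."""
    sol = Ainv @ np.concatenate(([0.0], y))
    alpha = sol[1:]
    return alpha / np.diag(Ainv)[1:]
```

A wrapper-style selection loop would then try each candidate variable with `add_variable`, score it by the LOO error from `loo_residuals`, and keep the best, paying one rank-one update per candidate rather than one full solve.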