This paper addresses the problem of variable ranking for support vector regression. The ranking criteria we propose are based on leave-one-out bounds and several variants thereof. For these criteria we compare two search-space algorithms: recursive feature elimination and scaling-factor optimization based on gradient descent. All of these algorithms are compared on toy problems and on real-world QSAR data sets. Results show that the radius-margin criterion is the most effective criterion for ranking variables. Using this criterion leads to support vector regressors with improved error rates while using fewer variables. Our results also support the evidence that the gradient-descent algorithm achieves a better variable ranking than the backward-elimination algorithm.
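The recursive feature elimination strategy mentioned above can be sketched as a backward loop that repeatedly fits a model and discards the least important variable. The sketch below is a minimal, hypothetical illustration: it uses the squared weights of an ordinary least-squares fit as a stand-in for the SVR-based leave-one-out and radius-margin criteria studied in the paper, and the function name `rfe_rank` is our own.

```python
import numpy as np

def rfe_rank(X, y):
    """Backward elimination: repeatedly fit a linear model and drop the
    feature with the smallest squared weight. (A stand-in criterion; the
    paper ranks with SVR leave-one-out / radius-margin bounds instead.)"""
    remaining = list(range(X.shape[1]))
    ranking = []  # filled from least to most important
    while remaining:
        w, *_ = np.linalg.lstsq(X[:, remaining], y, rcond=None)
        worst = int(np.argmin(w ** 2))
        ranking.append(remaining.pop(worst))
    return ranking[::-1]  # most important variable first

# Toy regression problem: y depends only on features 0 and 2.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 3.0 * X[:, 0] + 2.0 * X[:, 2] + 0.1 * rng.normal(size=200)
print(rfe_rank(X, y))  # features 0 and 2 should be ranked first
```

In the paper's setting the linear fit would be replaced by training an SVR at each elimination step and scoring the candidate removals with a leave-one-out bound; the backward loop structure is the same.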