Recently, an appealing approach was proposed for computing the entire solution path for support vector classification (SVC) at very low extra computational cost. This approach was later extended to a support vector regression (SVR) model known as ε-SVR. However, the method requires that the error parameter ε in the ε-insensitive loss function be set a priori, which is only possible if the desired accuracy of the approximation can be specified in advance. In this paper, we show that the solution path for ε-SVR is also piecewise linear with respect to ε. We further propose an efficient algorithm for exploring the two-dimensional solution space defined by the regularization parameter and the error parameter. Unlike the path-following algorithm for SVC, our algorithm for ε-SVR starts with zero support vectors and adds them gradually as it proceeds, so a good regression function with the desired sparseness property can be obtained after only a few iterations.
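The role ε plays in controlling sparsity can be illustrated numerically. The sketch below is not the paper's path-following algorithm; it simply refits scikit-learn's standard ε-SVR solver over a decreasing grid of ε values (on hypothetical toy data) and reports the support-vector count, showing the behavior the path exploits: at large ε every point lies inside the ε-tube and there are zero support vectors, and support vectors enter gradually as ε shrinks.

    import numpy as np
    from sklearn.svm import SVR

    # Toy 1-D regression data (hypothetical; any dataset would do).
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)

    # Brute-force refitting over a decreasing grid of epsilon values.
    # This is NOT the proposed path algorithm, which traces the exact
    # piecewise-linear solution path instead of retraining at each value.
    for eps in [1.0, 0.5, 0.2, 0.1, 0.05, 0.01]:
        model = SVR(kernel="rbf", C=10.0, epsilon=eps).fit(X, y)
        print(f"epsilon = {eps:5.2f} -> {model.support_.size:3d} support vectors")

Between the ε values at which points enter or leave the ε-tube, the solution changes linearly in ε; this is the piecewise-linear structure that lets the proposed algorithm follow the path exactly rather than retraining at every grid point.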