The dual formulation of support vector regression involves two closely related sets of variables. When the decomposition method is used, many existing approaches take pairs of indices from these two sets as the working set: they first select a base set and then expand it so that every index appears as a pair. This makes the implementation differ from that for support vector classification, and a larger optimization subproblem must be solved in each iteration. We provide theoretical proofs and conduct experiments showing that using the base set itself as the working set leads to similar convergence (a similar number of iterations). Hence, with a smaller working set and a comparable number of iterations, the program can be simpler and more efficient.
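For context, a minimal sketch of the standard epsilon-insensitive SVR dual that gives rise to the two sets of variables mentioned above (notation is assumed here, not taken from the abstract: kernel matrix Q with Q_ij = K(x_i, x_j), penalty C, tube width epsilon, targets y_i, l training examples):

    \min_{\alpha,\alpha^*}\ \frac{1}{2}(\alpha-\alpha^*)^{T} Q (\alpha-\alpha^*)
        + \epsilon \sum_{i=1}^{l} (\alpha_i + \alpha_i^*)
        - \sum_{i=1}^{l} y_i (\alpha_i - \alpha_i^*)
    \text{subject to}\quad \sum_{i=1}^{l} (\alpha_i - \alpha_i^*) = 0,
        \qquad 0 \le \alpha_i,\ \alpha_i^* \le C,\quad i = 1,\dots,l.

Each training example i thus contributes the pair (\alpha_i, \alpha_i^*). In the stacked variable vector z = (\alpha, \alpha^*) of length 2l, the paired convention expands a base set of indices by adding each index's conjugate, as the hypothetical sketch below illustrates (Python; expand_to_pairs is an illustrative name, not from the paper):

    # Hypothetical sketch of the pair-expansion convention: in the stacked
    # vector z = (alpha_1..alpha_l, alpha*_1..alpha*_l), each index i < l has
    # conjugate i + l, and vice versa.
    def expand_to_pairs(base_set, l):
        """Add the conjugate index for every index in the base set."""
        return sorted({i % l for i in base_set} | {(i % l) + l for i in base_set})

    B = [2, 7]                        # base set picked by some selection rule
    print(expand_to_pairs(B, l=10))   # [2, 7, 12, 17]: a 4-variable subproblem
    print(sorted(B))                  # [2, 7]: the 2-variable alternative studied here

Under the paired convention the subproblem has 2|B| variables; the abstract's claim is that using the base set B directly (|B| variables) converges in a comparable number of iterations, so each iteration is cheaper and the implementation can mirror the classification case.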