Expert Systems with Applications: An International Journal
Hard support vector regression (HSVR) is prone to overfitting in the presence of noise because it applies no regularization to bound the Lagrange multipliers, which can therefore grow without limit. We propose a greedy stagewise algorithm that trains HSVR approximately. At each iteration, the sample with the largest predicted discrepancy is selected and its weight is updated exactly once, so that no single multiplier can be excessively magnified. This early-stopping rule implicitly controls the capacity of the regression machine and is therefore equivalent to a form of regularization. Compared with the well-known LIBSVM 2.82 software, the proposed algorithm shows advantages in both training time and the number of support vectors. Experimental results on synthetic and real-world benchmark data sets corroborate its efficacy.
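The abstract describes the algorithm only at a high level, so the following is a minimal Python sketch of the greedy stagewise idea as stated: pick the sample with the largest residual outside the eps-insensitive tube, set its weight once, and stop after a fixed budget of iterations. The RBF kernel, the one-shot step that moves the residual to the tube boundary, the omitted bias term, and all names (greedy_stagewise_hsvr, epsilon, max_iter) are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Gaussian RBF kernel matrix between the rows of X and the rows of Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def greedy_stagewise_hsvr(X, y, epsilon=0.1, max_iter=100, gamma=1.0):
    """Sketch of greedy stagewise training for hard eps-insensitive SVR.

    Each iteration selects the sample with the maximal discrepancy outside
    the eps-tube and updates its coefficient exactly once; the iteration
    budget max_iter acts as the implicit (early-stopping) regularizer.
    The bias term is omitted for simplicity.
    """
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    alpha = np.zeros(n)            # per-sample weights (SV coefficients)
    f = np.zeros(n)                # current predictions on the training set
    updated = np.zeros(n, bool)    # each sample's weight is set at most once

    for _ in range(max_iter):
        residual = y - f
        # predicted discrepancy: how far each residual lies outside the tube
        disc = np.maximum(np.abs(residual) - epsilon, 0.0)
        disc[updated] = 0.0        # never revisit an already-updated sample
        i = int(np.argmax(disc))
        if disc[i] <= 0.0:         # every sample is inside the tube: done
            break
        # one-shot update: move sample i's residual onto the tube boundary
        step = (np.abs(residual[i]) - epsilon) * np.sign(residual[i])
        alpha[i] = step / K[i, i]
        f += alpha[i] * K[:, i]
        updated[i] = True

    sv = np.flatnonzero(alpha)     # support vectors = the updated samples
    return alpha[sv], X[sv]
```

Under these assumptions, prediction on new data uses only the retained samples, e.g. `rbf_kernel(X_new, X_sv, gamma) @ alpha_sv`, so the early-stopping budget bounds the number of support vectors and hence the test-time cost as well, which is consistent with the advantages the abstract claims over LIBSVM in model size.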