Classical support vector regression (SVR) is built on convex loss functions. Since non-convex loss functions can outperform convex ones in generalization performance and robustness, we propose a non-convex loss function for SVR and apply the concave-convex procedure (CCP) to transform the resulting non-convex optimization problem into a sequence of convex ones. A Newton-type algorithm is then developed to solve the proposed robust SVR in the primal; it retains the sparseness of SVR while suppressing the influence of outliers in the training set. Its effectiveness, namely improved generalization, is validated through experiments on synthetic and real-world benchmark data sets.
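The abstract names the ingredients (non-convex loss, CCP, primal solve) but not the exact loss. The sketch below is illustrative rather than the authors' algorithm: it assumes a linear model and a truncated epsilon-insensitive (ramp) loss, a common non-convex choice in this line of work, and it swaps the paper's Newton-type primal solver for SciPy's L-BFGS-B so the subproblem solve stays short. The name ramp_svr_ccp and the parameters s (truncation level) and n_outer (CCP iterations) are hypothetical.

    import numpy as np
    from scipy.optimize import minimize

    def ramp_svr_ccp(X, y, C=1.0, eps=0.1, s=2.0, n_outer=10):
        """Linear SVR with a truncated (ramp) eps-insensitive loss via CCP.

        Ramp loss R(r) = H_eps(r) - H_s(r), where H_t(r) = max(|r| - t, 0):
        H_eps is convex and -H_s is concave.  Each outer iteration
        linearizes the concave part at the current residuals and solves
        the remaining convex problem (here with L-BFGS-B, not the
        paper's Newton-type method).
        """
        d = X.shape[1]
        theta = np.zeros(d + 1)          # packed [w, b]

        for _ in range(n_outer):
            r = X @ theta[:d] + theta[d] - y
            # (Sub)gradient of the concave term -H_s(r) in the residual:
            # points with |r| > s are the suspected outliers whose pull
            # on the fit gets cancelled in the convex subproblem.
            beta = np.where(np.abs(r) > s, -np.sign(r), 0.0)

            def obj(th, beta=beta):
                w, b = th[:d], th[d]
                r = X @ w + b - y
                hinge = np.maximum(np.abs(r) - eps, 0.0)   # convex H_eps
                val = 0.5 * w @ w + C * (hinge.sum() + beta @ r)
                g_r = C * (np.sign(r) * (hinge > 0) + beta)
                return val, np.r_[w + X.T @ g_r, g_r.sum()]

            theta = minimize(obj, theta, jac=True, method="L-BFGS-B").x

        return theta[:d], theta[d]

As s grows, the ramp loss reduces to the ordinary eps-insensitive loss; a moderate s caps the gradient contribution of any single point, which is the outlier-suppression effect the abstract describes.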