A new smoothing strategy for solving ε-support vector regression (ε-SVR), which tolerates a small error in fitting a given data set linearly or nonlinearly, is proposed in this paper. Conventionally, ε-SVR is formulated as a constrained minimization problem, namely a convex quadratic programming problem. We apply the smoothing techniques that have been used for solving the support vector machine for classification to replace the ε-insensitive loss function by an accurate smooth approximation. This allows us to solve ε-SVR directly as an unconstrained minimization problem. We call this reformulated problem ε-smooth support vector regression (ε-SSVR). We also prescribe a Newton-Armijo algorithm, which has been shown to converge globally and quadratically, to solve our ε-SSVR. To handle nonlinear regression with a massive data set, we introduce the reduced kernel technique to avoid the computational difficulty of dealing with a huge and fully dense kernel matrix. Numerical results and comparisons are given to demonstrate the effectiveness and speed of the algorithm.
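The abstract does not reproduce the paper's exact smoothing function, so the sketch below uses the standard plus-function smoothing from the smooth SVM literature as a stand-in: max(0, x) is approximated by p(x, α) = x + α⁻¹ log(1 + e^(−αx)), and the ε-insensitive loss max(0, |r| − ε) is then approximated by p(r − ε, α) + p(−r − ε, α). The function names and the parameter values (ε = 0.5, α = 10) are illustrative choices, not taken from the paper.

```python
import numpy as np

def p_plus(x, alpha=10.0):
    """Smooth approximation of max(0, x): x + (1/alpha) * log(1 + exp(-alpha*x)).
    np.logaddexp keeps the exponential overflow-safe for large |x|."""
    return x + np.logaddexp(0.0, -alpha * x) / alpha

def eps_insensitive(r, eps=0.5):
    """Exact epsilon-insensitive loss: max(0, |r| - eps)."""
    return np.maximum(0.0, np.abs(r) - eps)

def smooth_eps_insensitive(r, eps=0.5, alpha=10.0):
    """Smooth surrogate: for any residual r, at most one of the two
    plus-function terms is significantly active."""
    return p_plus(r - eps, alpha) + p_plus(-r - eps, alpha)

# The surrogate dominates the true loss, and the gap (largest near the
# kinks at r = +/-eps) shrinks as alpha grows, which is what makes an
# unconstrained smooth reformulation an accurate replacement.
residuals = np.linspace(-3.0, 3.0, 13)
gap = smooth_eps_insensitive(residuals) - eps_insensitive(residuals)
print(gap.max())
```

Because the surrogate is twice continuously differentiable, a Newton-type method with an Armijo line search, as the abstract prescribes, can be applied to the resulting unconstrained problem directly.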