Related articles:
- CPBUM neural networks for modeling with outliers and noise. Applied Soft Computing.
- Extended support vector interval regression networks for interval input-output data. Information Sciences: An International Journal.
- Boundary Processing of HHT Using Support Vector Regression Machines. ICCS '07: Proceedings of the 7th International Conference on Computational Science, Part III.
- Hybrid robust approach for TSK fuzzy modeling with outliers. Expert Systems with Applications: An International Journal.
- Financial time series forecasting using independent component analysis and support vector regression. Decision Support Systems.
- TS-fuzzy system-based support vector regression. Fuzzy Sets and Systems.
- Hybrid robust support vector machines for regression with outliers. Applied Soft Computing.
- A sparse kernel algorithm for online time series data prediction. Expert Systems with Applications: An International Journal.
- Twin least squares support vector regression. Neurocomputing.
Support vector regression (SVR) applies the support vector machine (SVM) to problems of function approximation and regression estimation, and it has been shown to be robust against noise. However, when the SVR parameters are improperly selected, overfitting can still occur, and selecting suitable parameters is not straightforward. Moreover, outliers may be taken as support vectors, and including outliers among the support vectors can lead to serious overfitting. In this paper, a novel regression approach, termed the robust support vector regression (RSVR) network, is proposed to enhance the robustness of SVR. In this approach, traditional robust learning techniques are employed to improve learning performance for any selected parameters. The simulation results show that RSVR consistently improves the performance of the learned systems in all test cases. Moreover, even when training continues for a long period, the testing error does not increase; in other words, the overfitting phenomenon is indeed suppressed.
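The general idea of layering robust learning on top of SVR can be illustrated with a minimal sketch. This is not the paper's RSVR network; it is a hypothetical example assuming scikit-learn's `SVR`: a standard epsilon-SVR is fit on noisy data with injected outliers, then the samples are reweighted by a Huber-style factor of their residuals so that outliers contribute little to a second fit.

```python
# Hedged sketch (NOT the paper's RSVR): epsilon-SVR on noisy sinc data with
# injected gross outliers, followed by one robust reweighting pass.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 120).reshape(-1, 1)
y = np.sinc(X).ravel() + rng.normal(0, 0.05, 120)
y[rng.choice(120, 6, replace=False)] += 2.0      # inject gross outliers

# Plain SVR fit: outliers may become support vectors and distort the model.
svr = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)

# Robust pass: weight each sample by a Huber-style factor of its residual,
# using a median-based robust scale estimate, then refit with those weights.
r = np.abs(y - svr.predict(X))
c = 1.345 * np.median(r) / 0.6745                # robust scale (assumption)
w = np.where(r <= c, 1.0, c / np.maximum(r, 1e-12))
rsvr = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y, sample_weight=w)
```

In this sketch the outlying samples receive weights well below 1, so the second fit tracks the underlying sinc curve more closely; the constants `1.345` and `0.6745` are the usual Huber tuning constant and normal-consistency factor, chosen here for illustration only.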