IEEE Transactions on Neural Networks
Least Squares Support Vector Machines (LS-SVM) are a state-of-the-art kernel method for regression and function approximation, and in recent years these models have been successfully applied to time series modelling and prediction. A key issue for the good performance of an LS-SVM model is the choice of values for both the kernel parameters and the hyperparameters, which must avoid overfitting the underlying system to be modelled. In this paper, an efficient method for evaluating the cross-validation error of an LS-SVM is revisited, and expressions for its partial derivatives are presented in order to improve the parameter optimization procedure. Heuristic initial guesses for both the kernel parameters and the regularization factor are also proposed. Finally, experiments on a time series example compare several parameter optimization methods for LS-SVM models. The results show that the proposed partial derivatives and heuristics improve performance with respect to both execution time and the quality of the optimized model obtained.
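To illustrate the kind of fast cross-validation evaluation the abstract refers to: for LS-SVM in its simplest form (equivalent to kernel ridge regression, with the bias term omitted for clarity), the leave-one-out residuals are available in closed form from a single matrix inversion, rather than by retraining the model n times. The sketch below is a minimal illustration under these simplifying assumptions; the function names, the RBF kernel choice, and the omission of the bias term are this example's conventions, not the paper's exact formulation.

```python
import numpy as np

def rbf_kernel(X, Z, sigma):
    """Gaussian (RBF) kernel matrix between row sets X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def loo_mse_fast(X, y, sigma, gamma):
    """Closed-form leave-one-out MSE for bias-free LS-SVM.

    Solving (K + I/gamma) alpha = y once, the exact LOO residual of
    sample i is alpha_i / [C^{-1}]_{ii}, the classical PRESS formula
    for (kernel) ridge regression.
    """
    n = len(y)
    C = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    C_inv = np.linalg.inv(C)
    alpha = C_inv @ y
    e_loo = alpha / np.diag(C_inv)  # exact LOO residuals, no retraining
    return float(np.mean(e_loo ** 2))
```

A hyperparameter search can then minimize `loo_mse_fast` over `(sigma, gamma)`, typically in log-space; the paper's contribution of analytic partial derivatives serves exactly to speed up such a gradient-based search.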