In using the ε-support vector regression (ε-SVR) algorithm, one has to choose a suitable value of the insensitivity parameter ε. Smola et al. [6] determined its "optimal" value by maximizing the statistical efficiency of a location parameter estimator. While they successfully predicted a linear scaling between the optimal ε and the noise in the data, the theoretically optimal ε does not closely match its experimentally observed counterpart. In this paper, we attempt to better explain those experimental results by analyzing a toy problem whose setting is closer to that of ε-SVR. The resulting predicted value of ε is much closer to the experimentally observed one, while still scaling linearly with the data noise.
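The linear scaling rule discussed above, ε = c·σ, can be sketched numerically. The snippet below is a minimal illustration, not the paper's method: the constant `c = 0.6` is a placeholder (Smola et al. derive a specific constant for Gaussian noise, and this paper argues for a different one), and the pilot polynomial fit used to estimate the noise level σ is an assumption chosen only to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a smooth signal plus Gaussian noise of known scale.
x = np.linspace(0.0, 2.0 * np.pi, 200)
sigma_true = 0.3
y = np.sin(x) + rng.normal(0.0, sigma_true, size=x.shape)

# Estimate the noise level from residuals of a rough preliminary fit
# (a low-order polynomial here; any reasonable pilot estimator would do).
coeffs = np.polyfit(x, y, deg=7)
residuals = y - np.polyval(coeffs, x)
sigma_hat = residuals.std(ddof=coeffs.size)

# Linear scaling rule: set epsilon proportional to the estimated noise.
c = 0.6  # placeholder constant; the analyses in [6] and this paper differ on its value
epsilon = c * sigma_hat
print(f"sigma_hat = {sigma_hat:.3f}, epsilon = {epsilon:.3f}")
```

The point of the sketch is only the structure of the rule: estimate the noise scale once, then set ε as a fixed multiple of it, so ε tracks the noise linearly.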