Optimized Local Kernel Machines for Fast Time Series Forecasting
ICNC '07 Proceedings of the Third International Conference on Natural Computation - Volume 01
Kernel machines have been widely used in learning, but standard algorithms are often time-consuming. To this end, we propose a new method, direct simplification (DS), for imposing sparsity on kernel regression machines. Unlike existing sparse methods, DS performs approximation and optimization in a unified framework by incrementally finding a set of basis functions that minimizes the primal risk function directly. The main advantage of our method lies in its ability to form very good approximations of kernel regression machines with clear control over the computational complexity as well as the training time. Experiments on two real time series and two benchmarks assess the feasibility of our method and show that DS obtains better performance with fewer basis functions than two-step sparse methods.
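
The sketch below illustrates the general idea the abstract describes: greedily growing a set of basis functions while refitting coefficients against a primal (regularized squared) risk. The kernel choice, loss, and exhaustive candidate search here are assumptions for illustration, not the paper's exact DS procedure.

# A minimal sketch of incremental basis selection for sparse kernel
# regression. Assumed details (not from the abstract): RBF kernel,
# squared loss, and exhaustive greedy search over candidate centers.
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Gaussian RBF kernel matrix between rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def greedy_sparse_kernel_regression(X, y, n_basis=20, lam=1e-3, gamma=1.0):
    """Incrementally pick the basis center that most reduces the primal
    regularized risk, refitting the coefficients at each step."""
    n = X.shape[0]
    selected, candidates = [], list(range(n))
    K_full = rbf_kernel(X, X, gamma)           # all candidate basis columns
    best_alpha = None
    for _ in range(n_basis):
        best_risk, best_j = np.inf, None
        for j in candidates:
            cols = selected + [j]
            Phi = K_full[:, cols]              # n x m design matrix
            Kmm = K_full[np.ix_(cols, cols)]   # m x m regularizer block
            # Primal problem restricted to the chosen bases:
            #   min_alpha ||Phi alpha - y||^2 + lam * alpha^T Kmm alpha
            alpha = np.linalg.solve(Phi.T @ Phi + lam * Kmm, Phi.T @ y)
            r = Phi @ alpha - y
            risk = r @ r + lam * alpha @ Kmm @ alpha
            if risk < best_risk:
                best_risk, best_j, best_alpha = risk, j, alpha
        selected.append(best_j)
        candidates.remove(best_j)
    return X[selected], best_alpha

# Usage: fit a noisy sine curve with a handful of basis functions.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
centers, alpha = greedy_sparse_kernel_regression(X, y, n_basis=10, gamma=0.5)
y_hat = rbf_kernel(X, centers, 0.5) @ alpha
print("train MSE:", np.mean((y_hat - y) ** 2))

Note that the exhaustive search over all remaining candidates at every step is expensive; the computation-control claim in the abstract suggests the actual DS algorithm uses a cheaper selection criterion.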