Weighted Kernel Regression for Predicting Changing Dependencies
ECML '07 Proceedings of the 18th European conference on Machine Learning
Kernel Ridge Regression (KRR) and the recently developed Kernel Aggregating Algorithm for Regression (KAAR) are regression methods based on least squares. KAAR has a theoretical advantage over KRR: a worst-case bound on its square loss is known, and this bound makes no assumptions about the underlying probability distribution of the data; no such bound holds for KRR. In practice, however, KAAR performs better only when the data is heavily corrupted by noise or contains severe outliers, because KAAR is essentially KRR with some fairly strong extra regularisation. In this paper we develop KAAR so as to make it practical for use on real-world data, by controlling the amount of extra regularisation. Empirical results (including results on the well-known Boston Housing dataset) suggest that, in terms of square loss, our new methods generally perform as well as or better than KRR, KAAR and Support Vector Machines (SVM).
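The relationship between the two methods described above can be sketched in code. This is a minimal illustration, not the paper's implementation: it assumes the standard dual form of KRR, gamma = k'(K + aI)^{-1}y, and the usual formulation of KAAR in which the test point is appended to the kernel matrix and the target vector is padded with a zero, which is what produces the extra regularisation (the KAAR prediction is the KRR prediction shrunk towards zero). The RBF kernel and all parameter names are illustrative choices.

```python
import numpy as np

def rbf(X1, X2, sigma=1.0):
    # Gaussian (RBF) kernel matrix between two sets of points
    # (an illustrative kernel choice, not one mandated by the paper).
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def krr_predict(X, y, x_new, a=1.0, sigma=1.0):
    # Kernel Ridge Regression in dual variables:
    #   gamma = k' (K + a I)^{-1} y
    K = rbf(X, X, sigma)
    k = rbf(X, x_new[None, :], sigma).ravel()
    return k @ np.linalg.solve(K + a * np.eye(len(X)), y)

def kaar_predict(X, y, x_new, a=1.0, sigma=1.0):
    # KAAR: build the kernel matrix on training points PLUS the test
    # point, pad y with a zero, and predict
    #   gamma = y~' (K~ + a I)^{-1} k~
    # Algebraically this equals the KRR prediction multiplied by a
    # shrinkage factor in (0, 1], i.e. KRR with extra regularisation.
    Xa = np.vstack([X, x_new[None, :]])
    Kt = rbf(Xa, Xa, sigma)
    kt = Kt[:, -1]                     # kernel values against the test point
    yt = np.append(y, 0.0)             # targets padded with 0
    return yt @ np.linalg.solve(Kt + a * np.eye(len(Xa)), kt)

# Toy usage: noisy sine data, one test point.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(40)
x_new = np.array([0.5])

g_krr = krr_predict(X, y, x_new, a=0.1)
g_kaar = kaar_predict(X, y, x_new, a=0.1)
# |g_kaar| <= |g_krr|: KAAR's prediction is a shrunk KRR prediction.
```

The shrinkage factor works out to a / (a + k(x,x) - k'(K + aI)^{-1}k), which is what the paper's methods aim to control: interpolating between full KAAR shrinkage and plain KRR.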