Ridge regression is a classical statistical technique that attempts to address the bias-variance trade-off in the design of linear regression models. A reformulation of ridge regression in dual variables permits a non-linear form of ridge regression via the well-known ‘kernel trick’. Unfortunately, unlike support vector regression models, the resulting kernel expansion is typically fully dense. In this paper, we introduce a reduced rank kernel ridge regression (RRKRR) algorithm capable of generating an optimally sparse kernel expansion that is functionally identical to that resulting from conventional kernel ridge regression (KRR). The proposed method is demonstrated to outperform an alternative sparse kernel ridge regression algorithm on the Motorcycle and Boston Housing benchmarks.
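To make the dense-expansion issue concrete, the following is a minimal sketch of conventional kernel ridge regression in dual variables, not of the RRKRR algorithm itself, whose details are not given in this abstract. The choice of an RBF kernel and the names `lam` and `gamma` are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Gram matrix K[i, j] = exp(-gamma * ||x_i - z_j||^2)
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def fit_krr(X, y, lam=1e-2, gamma=1.0):
    # Dual-variable solution: alpha = (K + lam * I)^{-1} y
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict_krr(X_train, alpha, X_new, gamma=1.0):
    # f(x) = sum_i alpha_i k(x_i, x); alpha is typically
    # fully dense, so every training point enters the sum.
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# Toy usage on synthetic 1-D data
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
alpha = fit_krr(X, y, lam=1e-2, gamma=0.5)
X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
print(predict_krr(X, alpha, X_test, gamma=0.5))
```

Because every training point receives a coefficient, prediction touches the entire training set at test time; a sparse method in the spirit of RRKRR aims to reproduce the same predictor while retaining only a small subset of non-zero coefficients.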