The nature of statistical learning theory.
Prediction with Gaussian processes: from linear regression to linear prediction and beyond.
Learning in graphical models.
Sparse on-line Gaussian processes. Neural Computation.
Evaluation of Gaussian processes and other methods for non-linear regression.
Adaptive sparseness for supervised learning. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Sparse Bayesian learning and the relevance vector machine. The Journal of Machine Learning Research.
Adaptive spherical Gaussian kernel in sparse Bayesian learning framework for nonlinear regression. Expert Systems with Applications: An International Journal.
Sparse kernel ridge regression using backward deletion. PRICAI'06: Proceedings of the 9th Pacific Rim International Conference on Artificial Intelligence.
Evaluation of feature selection by multiclass kernel discriminant analysis. ANNPR'10: Proceedings of the 4th IAPR TC3 Conference on Artificial Neural Networks in Pattern Recognition.
Gaussian processes (GPs) achieve state-of-the-art performance in regression. However, GPs use all the basis functions for prediction, so their test speed is slower than that of other learning algorithms such as support vector machines (SVMs), the relevance vector machine (RVM), and adaptive sparseness (AS). To overcome this limitation, we present a backward elimination algorithm, called GPs-BE, that recursively selects the basis functions for GPs until a stopping criterion is satisfied. By integrating rank-1 updates, GPs-BE can be implemented at reasonable cost. Extensive empirical comparisons confirm the feasibility and validity of the proposed algorithm.
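The abstract only sketches GPs-BE, so the snippet below is a minimal NumPy sketch of the general idea rather than the paper's exact method: start from the full GP, repeatedly delete the basis function whose removal least perturbs the fit, and keep the inverse matrix current with a rank-1 downdate instead of refitting from scratch. The deletion score alpha_i^2 / A_ii (the exact increase in the data-fit term y^T (K + sigma^2 I)^{-1} y when basis i is dropped), the RBF kernel, and the fixed `min_bases` stopping rule are all assumptions made for illustration, since the abstract does not specify the selection criterion or the stop criterion.

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale=1.0):
    # Squared-exponential kernel matrix between two point sets (n1 x n2).
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_backward_eliminate(X, y, noise=0.1, min_bases=10):
    # Start from the full GP: A = (K + noise^2 I)^{-1}, alpha = A y.
    K = rbf_kernel(X, X)
    A = np.linalg.inv(K + noise ** 2 * np.eye(len(X)))  # O(n^3), done once
    alpha = A @ y
    active = list(range(len(X)))
    while len(active) > min_bases:  # assumed stopping rule
        # Deleting basis i raises the data-fit term y^T A y by alpha_i^2 / A_ii,
        # so remove the basis with the smallest such increase (assumed score).
        scores = alpha ** 2 / np.diag(A)
        j = int(np.argmin(scores))
        keep = np.arange(len(active)) != j
        # Rank-1 downdate of the inverse: O(m^2) per deletion, no refit needed.
        A = A[np.ix_(keep, keep)] - np.outer(A[keep, j], A[j, keep]) / A[j, j]
        active.pop(j)
        alpha = A @ y[active]
    return np.array(active), alpha

def gp_predict(Xtest, X, active, alpha, length_scale=1.0):
    # Sparse prediction: only the retained basis functions are evaluated.
    return rbf_kernel(Xtest, X[active], length_scale) @ alpha
```

For example, `active, alpha = gp_backward_eliminate(X, y, min_bases=50)` keeps 50 bases, after which each test prediction costs 50 kernel evaluations rather than one per training point, which is the speedup the abstract is after.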