The Relevance Vector Machine (RVM) is a sparse approximate Bayesian kernel method that provides full predictive distributions for test cases. However, the predictive uncertainties have the counterintuitive property that they shrink the further one moves away from the training cases. We give a thorough analysis of this behaviour. Inspired by the analogy to non-degenerate Gaussian processes, we suggest augmentation to solve the problem. The purpose of the resulting model, RVM*, is primarily to corroborate the theoretical and experimental analysis; although RVM* could be used in practical applications, it is no longer a truly sparse model. Experiments show that sparsity comes at the expense of worse predictive distributions.
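The shrinking-uncertainty effect described above can be demonstrated numerically. The sketch below (a minimal illustration, not the paper's implementation; the data, noise level, and unit weight prior are assumptions) treats the RVM as a Bayesian linear model over a finite RBF basis centred on the training inputs, and compares its predictive variance with that of a full, non-degenerate GP using the same kernel. Far from the data the degenerate model's variance collapses to the noise level, while the full GP reverts to its prior variance.

```python
import numpy as np

def rbf(a, b, ell=1.0):
    # Squared-exponential kernel matrix between 1-D input arrays a and b.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=20)              # training inputs (assumed toy data)
y = np.sin(X) + 0.1 * rng.standard_normal(20)
sigma2 = 0.01                                # noise variance (assumption)

# RVM-style degenerate model: finite basis centred on the training inputs,
# unit-precision Gaussian prior on the weights (assumption).
Phi = rbf(X, X)                              # design matrix
A = np.eye(20) + Phi.T @ Phi / sigma2        # weight posterior precision
S = np.linalg.inv(A)                         # weight posterior covariance

def rvm_var(xs):
    # Predictive variance of the degenerate model: sigma^2 + phi(x)^T S phi(x).
    phi = rbf(xs, X)
    return sigma2 + np.einsum("ij,jk,ik->i", phi, S, phi)

# Full (non-degenerate) GP with the same kernel.
K = rbf(X, X) + sigma2 * np.eye(20)
Kinv = np.linalg.inv(K)

def gp_var(xs):
    # GP predictive variance: k(x,x) + sigma^2 - k(x)^T K^{-1} k(x).
    k = rbf(xs, X)
    return 1.0 + sigma2 - np.einsum("ij,jk,ik->i", k, Kinv, k)

far = np.array([10.0])                       # far outside the training range
near = np.array([0.0])                       # inside the training range
# rvm_var(far) collapses to ~sigma2, whereas gp_var(far) reverts to ~1 + sigma2.
```

The basis functions vanish far from their centres, so phi(x) goes to zero and the degenerate model becomes overconfident exactly where it has seen no data; the full GP's variance grows back to the prior instead, which is the behaviour the augmentation in RVM* restores.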