Healing the relevance vector machine through augmentation

  • Authors:
  • Carl Edward Rasmussen; Joaquin Quiñonero-Candela

  • Affiliations:
  • Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Friedrich Miescher Laboratory, Max Planck Society, Tübingen, Germany

  • Venue:
  • ICML '05: Proceedings of the 22nd International Conference on Machine Learning
  • Year:
  • 2005

Abstract

The Relevance Vector Machine (RVM) is a sparse approximate Bayesian kernel method. It provides full predictive distributions for test cases. However, the predictive uncertainties have the unintuitive property that they get smaller the further you move away from the training cases. We give a thorough analysis of this behaviour. Inspired by the analogy to non-degenerate Gaussian Processes, we suggest augmentation to solve the problem. The purpose of the resulting model, RVM*, is primarily to corroborate the theoretical and experimental analysis. Although RVM* could be used in practical applications, it is no longer a truly sparse model. Experiments show that sparsity comes at the expense of worse predictive distributions.
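
The sketch below is not the authors' code; it only illustrates the phenomenon the abstract describes, under assumed settings (squared-exponential basis functions centred on the training inputs, fixed unit weight precisions in place of the RVM's learned alphas, and an arbitrary noise level). A degenerate model built from finitely many localized basis functions has a predictive variance that collapses to the noise level far from the data, whereas a non-degenerate GP's variance grows back toward the prior variance.

```python
# Minimal sketch (assumed settings, not the paper's implementation):
# compare the predictive variance of a degenerate finite-basis model
# with that of a non-degenerate GP, near and far from the training inputs.
import numpy as np

def rbf(a, b, lengthscale=1.0):
    """Squared-exponential kernel matrix between 1-D input arrays a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=20)                 # training inputs
noise = 0.01                                    # assumed noise variance sigma^2

# Degenerate model: weights on basis functions centred at the training inputs.
Phi = rbf(X, X)                                 # design matrix
A = np.eye(len(X))                              # stand-in for the RVM's weight precisions
Sigma_w = np.linalg.inv(Phi.T @ Phi / noise + A)  # posterior weight covariance

# Non-degenerate GP with the same kernel.
K_inv = np.linalg.inv(rbf(X, X) + noise * np.eye(len(X)))

for x_star in [0.0, 10.0]:                      # near the data, then far away
    phi_s = rbf(np.array([x_star]), X)          # 1 x N basis response / cross-covariance
    var_finite = noise + phi_s @ Sigma_w @ phi_s.T
    var_gp = 1.0 + noise - phi_s @ K_inv @ phi_s.T
    print(f"x*={x_star:5.1f}  finite-basis var={var_finite.item():.4f}  GP var={var_gp.item():.4f}")
```

Far from the data (x* = 10) the basis responses vanish, so the finite-basis variance falls to roughly the noise level, while the GP variance approaches the prior variance; this is the shrinking-uncertainty property the paper analyses and that the augmentation in RVM* is designed to repair.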