Numerical recipes in C (2nd ed.): the art of scientific computing
Bayesian learning for neural networks
Local dimensionality reduction
NIPS '97 Proceedings of the 1997 conference on Advances in neural information processing systems 10
Comparison of approximate methods for handling hyperparameters
Neural Computation
Bayesian parameter estimation via variational methods
Statistics and Computing
Locally Weighted Projection Regression: Incremental Real Time Learning in High Dimensional Space
ICML '00 Proceedings of the Seventeenth International Conference on Machine Learning
Variational Relevance Vector Machines
UAI '00 Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence
Sparse Bayesian learning and the relevance vector machine
The Journal of Machine Learning Research
Dimensionality Reduction for Supervised Learning with Reproducing Kernel Hilbert Spaces
The Journal of Machine Learning Research
Bayesian regression with input noise for high dimensional data
ICML '06 Proceedings of the 23rd international conference on Machine learning
Extended linear models with Gaussian prior on the parameters and adaptive expansion vectors
ICANN'07 Proceedings of the 17th international conference on Artificial neural networks
Bayesian robot system identification with input and output noise
Neural Networks
Traditional non-parametric statistical learning techniques are often computationally attractive, but lack the generalization and model-selection abilities of state-of-the-art Bayesian algorithms, which are, however, usually computationally prohibitive. This paper makes several important contributions that allow Bayesian learning to scale to more complex, real-world learning scenarios. First, we show that backfitting --- a traditional non-parametric, yet highly efficient regression tool --- can be derived in a novel formulation within an expectation maximization (EM) framework, and can thus finally be given a probabilistic interpretation. Second, we show that the general framework of sparse Bayesian learning, and in particular the relevance vector machine (RVM), can be derived as a highly efficient algorithm using a Bayesian version of backfitting at its core. As we demonstrate on several regression and classification benchmarks, Bayesian backfitting offers a compelling alternative to current regression methods, especially when the size and dimensionality of the data challenge computational resources.
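The abstract's central tool, classical backfitting, fits an additive model by cycling over the components and regressing each one on the partial residuals left by all the others. A minimal sketch of this idea, using simple univariate linear components on synthetic data (all names and data here are illustrative, not the paper's actual algorithm):

```python
import numpy as np

def backfit(X, y, n_iter=50):
    """Classical backfitting for an additive model y ~ sum_j f_j(x_j).

    Each component f_j is a slope-only univariate linear fit; richer
    smoothers (splines, kernels) slot into the same update loop.
    """
    n, d = X.shape
    beta = np.zeros(d)
    for _ in range(n_iter):
        for j in range(d):
            # Partial residual: subtract all components except the j-th.
            r = y - X @ beta + X[:, j] * beta[j]
            # Refit component j against its partial residual.
            beta[j] = (X[:, j] @ r) / (X[:, j] @ X[:, j])
    return beta

# Synthetic check: recover known slopes from noisy additive data.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.standard_normal(200)
print(backfit(X, y))
```

Each sweep is a Gauss-Seidel-style pass over the components, which is what makes backfitting so cheap per iteration; the paper's contribution is to recast exactly this kind of iteration as the M-step of an EM procedure, giving it a probabilistic interpretation.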