In this work, we discuss practical methods for the assessment, comparison, and selection of complex hierarchical Bayesian models. A natural way to assess the goodness of a model is to estimate its future predictive capability by estimating expected utilities. Instead of making only a point estimate, it is important to obtain the distribution of the expected utility estimate, because it describes the uncertainty in the estimate. The distributions of the expected utility estimates can also be used to compare models, for example, by computing the probability that one model has a better expected utility than another. We propose an approach that uses cross-validation predictive densities to obtain expected utility estimates and the Bayesian bootstrap to obtain samples from their distributions. We also discuss the probabilistic assumptions made by, and the properties of, two practical cross-validation methods: importance sampling and k-fold cross-validation. As illustrative examples, we use multilayer perceptron neural networks and Gaussian processes with Markov chain Monte Carlo sampling in one toy problem and two challenging real-world problems.
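The core computation described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: given per-observation utilities (for example, cross-validation log predictive densities), the Bayesian bootstrap draws flat Dirichlet weights over the observations and forms weighted means, yielding samples from the distribution of the expected utility; bootstrapping the paired differences between two models then gives the probability that one model has the better expected utility. All numbers below are simulated placeholders.

```python
import numpy as np

rng = np.random.default_rng(42)

def bayesian_bootstrap(values, n_draws=4000, rng=rng):
    """Sample from the distribution of the mean of `values` via the
    Bayesian bootstrap: each draw reweights the observations with a
    flat Dirichlet(1, ..., 1) weight vector."""
    values = np.asarray(values)
    w = rng.dirichlet(np.ones(len(values)), size=n_draws)
    return w @ values  # one weighted mean per draw, shape (n_draws,)

# Hypothetical per-observation utilities for two models, e.g.
# leave-one-out log predictive densities (simulated, not real data).
u_a = rng.normal(-1.0, 0.5, size=200)
u_b = rng.normal(-1.2, 0.5, size=200)

# Distribution of the expected utility estimate for model A.
draws_a = bayesian_bootstrap(u_a)
print("expected utility, model A:", draws_a.mean())
print("95% interval:", np.percentile(draws_a, [2.5, 97.5]))

# Paired comparison: bootstrap the per-observation differences and
# read off the probability that model A has the better expected utility.
p_a_better = (bayesian_bootstrap(u_a - u_b) > 0).mean()
print("P(A better than B):", p_a_better)
```

Bootstrapping the paired differences, rather than each model separately, keeps the per-observation dependence between the two models' utilities intact.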