Keeping the neural networks simple by minimizing the description length of the weights. COLT '93: Proceedings of the Sixth Annual Conference on Computational Learning Theory.
A Bayesian/Information Theoretic Model of Learning to Learn via Multiple Task Sampling. Machine Learning, special issue on inductive transfer.
Ensemble learning for multi-layer networks. NIPS '97: Proceedings of the 1997 Conference on Advances in Neural Information Processing Systems 10.
Radial basis functions: a Bayesian treatment. NIPS '97: Proceedings of the 1997 Conference on Advances in Neural Information Processing Systems 10.
A view of the EM algorithm that justifies incremental, sparse, and other variants. In Learning in Graphical Models.
Comparison of approximate methods for handling hyperparameters. Neural Computation.
Bayesian Learning for Neural Networks.
Empirical Bayes for Learning to Learn. ICML '00: Proceedings of the Seventeenth International Conference on Machine Learning.
We describe two specific examples of neural-Bayesian approaches to complex modeling tasks: survival analysis and multitask learning. In both cases, reasonable priors can be placed on the parameters of the neural network, and the resulting Bayesian approaches improve dramatically upon their (maximum likelihood) frequentist counterparts. Illustrating their application to the models under study, we review and compare algorithms for Bayesian inference: the Laplace approximation, variational algorithms, Monte Carlo sampling, and empirical Bayes.
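To make one of the inference schemes named above concrete, here is a minimal sketch of the Laplace approximation, using a hypothetical toy problem (a single logistic-regression weight with a Gaussian prior) that is not from the paper: the posterior over the weight is approximated by a Gaussian centred at the MAP estimate, with variance given by the inverse curvature of the negative log-posterior at that point. All function names and hyperparameters below are illustrative assumptions.

```python
import numpy as np

def neg_log_posterior(w, x, y, prior_var=1.0):
    """Negative log-posterior (up to a constant) for logistic regression
    with a single weight w and a zero-mean Gaussian prior."""
    logits = w * x
    # Bernoulli log-likelihood of labels y in {0, 1}
    log_lik = np.sum(y * logits - np.log1p(np.exp(logits)))
    log_prior = -0.5 * w ** 2 / prior_var
    return -(log_lik + log_prior)

def laplace_approximation(x, y, w0=0.0, lr=0.01, steps=1000, eps=1e-4):
    """Return (mean, variance) of the Gaussian Laplace approximation
    to the posterior p(w | x, y)."""
    w = w0
    for _ in range(steps):
        # finite-difference gradient descent to the MAP estimate
        grad = (neg_log_posterior(w + eps, x, y)
                - neg_log_posterior(w - eps, x, y)) / (2 * eps)
        w -= lr * grad
    # curvature (second derivative) at the MAP via central differences;
    # its inverse is the variance of the approximating Gaussian
    hess = (neg_log_posterior(w + eps, x, y)
            - 2 * neg_log_posterior(w, x, y)
            + neg_log_posterior(w - eps, x, y)) / eps ** 2
    return w, 1.0 / hess

# Toy data: labels correlate with the sign of the input, so the posterior
# mean of w should come out positive, with a small posterior variance.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = (x + 0.5 * rng.normal(size=200) > 0).astype(float)
mean, var = laplace_approximation(x, y)
```

The same recipe extends to a neural network by replacing the scalar second derivative with the Hessian of the negative log-posterior over all weights, which is where the approximation's Gaussian-around-a-mode character (and its limitations) become apparent.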