MacKay's (1992) Bayesian framework for backpropagation is a practical and powerful means of improving the generalization ability of neural networks. It is based on a Gaussian approximation to the posterior weight distribution. The framework is extended, reviewed, and demonstrated in a pedagogical way. The notation is simplified using the ordinary weight decay parameter, and a detailed, explicit procedure for adjusting several weight decay parameters is given. Bayesian backprop is applied to the prediction of fat content in minced meat from near-infrared spectra, where it outperforms both "early stopping" and quadratic regression. The evidence of a committee of differently trained networks is computed, and the corresponding improved generalization is verified. Error bars on the predictions of the fat content are computed; there are three contributors: the random noise, the uncertainty in the weights, and the deviation among the committee members. The Bayesian framework is compared to Moody's GPE (1992). Finally, MacKay and Neal's automatic relevance determination, in which the weight decay parameters depend on the input number, is applied to the data, with improved results.
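The three error-bar contributions described in the abstract combine additively as variances. The sketch below illustrates that combination under stated assumptions; the committee predictions and the two variance estimates are hypothetical stand-ins, not the paper's actual data or procedure.

```python
import numpy as np

# Hypothetical committee of K=5 differently trained networks predicting
# fat content for N=10 test spectra (illustrative values only).
rng = np.random.default_rng(0)
committee_preds = rng.normal(loc=20.0, scale=0.5, size=(5, 10))

sigma_noise2 = 0.3 ** 2        # assumed estimate of the random-noise variance
sigma_weights2 = 0.2 ** 2      # assumed variance from weight uncertainty,
                               # from the Gaussian posterior approximation
# Spread among committee members, per test point
sigma_committee2 = committee_preds.var(axis=0)

# Committee prediction and total error bar: the three variance
# contributions are summed and the square root taken.
y_hat = committee_preds.mean(axis=0)
error_bar = np.sqrt(sigma_noise2 + sigma_weights2 + sigma_committee2)
```

The committee term vanishes only when all members agree, so the total error bar is never smaller than the noise-plus-weight-uncertainty floor.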