Regularization with a pruning prior
Neural Networks
Neural networks (NN) are well known for their flexibility, which makes them attractive for problems where there is insufficient knowledge to set up a proper model. On the other hand, this flexibility can cause overfitting and hamper the generalization of neural networks. Many approaches to regularizing NN have been suggested, but most of them are based on ad hoc arguments. Employing the principle of transformation invariance, we derive a general prior for feed-forward networks in accordance with Bayesian probability theory. An optimal network is determined by Bayesian model comparison, verifying the applicability of this approach. Additionally, the proposed prior affords cell pruning.
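To illustrate the general idea of Bayesian regularization described above — a prior over the weights turns into a penalty term added to the data misfit, and training becomes maximum a posteriori (MAP) estimation — here is a minimal NumPy sketch. It uses a standard Gaussian (weight-decay) prior with assumed precision `alpha` on a small feed-forward network fitted to toy data; it is not the transformation-invariant prior derived in the paper, and all names and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: a noisy sine curve.
X = rng.uniform(-3.0, 3.0, size=(40, 1))
y = np.sin(X) + 0.1 * rng.standard_normal(X.shape)

# One-hidden-layer network (8 tanh units); sizes are arbitrary choices.
H = 8
W1 = 0.5 * rng.standard_normal((1, H)); b1 = np.zeros(H)
W2 = 0.5 * rng.standard_normal((H, 1)); b2 = np.zeros(1)

alpha = 1e-2  # assumed prior precision: -log p(w) = (alpha/2) * sum(w**2) + const

def forward(X, W1, b1, W2, b2):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

lr = 0.05
for step in range(2000):
    h, pred = forward(X, W1, b1, W2, b2)
    err = pred - y
    # Gradients of the mean-squared data misfit via backprop.
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h**2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    # MAP update: misfit gradient plus gradient of the negative log prior
    # (alpha * w), i.e. classical weight decay on the weight matrices.
    W2 -= lr * (gW2 + alpha * W2); b2 -= lr * gb2
    W1 -= lr * (gW1 + alpha * W1); b1 -= lr * gb1

_, pred = forward(X, W1, b1, W2, b2)
mse = float(np.mean((pred - y) ** 2))
```

With the Gaussian prior, the penalty shrinks all weights uniformly; the pruning-oriented priors discussed above instead concentrate mass near zero so that entire units can be driven out of the model.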