Invariance priors for Bayesian feed-forward neural networks

  • Authors:
  • Udo v. Toussaint, Silvio Gori, Volker Dose

  • Affiliations:
  • Centre for Interdisciplinary Plasma Science, Max-Planck-Institut für Plasmaphysik, EURATOM Association, Boltzmannstr. 2, D-85748 Garching, Germany (all authors)

  • Venue:
  • Neural Networks
  • Year:
  • 2006

Abstract

Neural networks (NNs) are well known for their flexibility, which makes them attractive for problems where insufficient knowledge is available to set up a proper model. On the other hand, this flexibility can cause overfitting and hamper the generalization of neural networks. Many approaches to regularizing NNs have been suggested, but most of them are based on ad hoc arguments. Employing the principle of transformation invariance, we derive a general prior for feed-forward networks in accordance with Bayesian probability theory. An optimal network is determined by Bayesian model comparison, verifying the applicability of this approach. In addition, the prior presented affords cell pruning.
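The abstract does not spell out the invariance prior itself, but the regularization setting it refers to can be illustrated with the standard Bayesian baseline that such priors generalize: a zero-mean Gaussian prior on the weights of a feed-forward network, whose negative logarithm yields the familiar L2 "weight decay" penalty added to the data misfit. The sketch below is a minimal NumPy illustration of that baseline (network size, `alpha`, and `beta` are illustrative choices, not the paper's), not an implementation of the transformation-invariance prior derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 40)[:, None]                   # toy inputs
y = np.sin(3.0 * x) + 0.1 * rng.standard_normal(x.shape)  # noisy targets

N_HIDDEN = 8  # one hidden layer; 25 parameters in total

def unpack(w, n_hidden=N_HIDDEN):
    """Split a flat parameter vector into layer weights and biases."""
    W1 = w[:n_hidden].reshape(1, n_hidden)
    b1 = w[n_hidden:2 * n_hidden]
    W2 = w[2 * n_hidden:3 * n_hidden].reshape(n_hidden, 1)
    b2 = w[3 * n_hidden]
    return W1, b1, W2, b2

def predict(w, x):
    """Forward pass of a one-hidden-layer tanh network."""
    W1, b1, W2, b2 = unpack(w)
    return np.tanh(x @ W1 + b1) @ W2 + b2

def neg_log_posterior(w, alpha=1e-2, beta=100.0):
    """Negative log posterior up to a constant:
    beta/2 * sum of squared residuals  (Gaussian likelihood),
    alpha/2 * ||w||^2                  (Gaussian weight prior -> weight decay).
    """
    err = predict(w, x) - y
    return 0.5 * beta * np.sum(err ** 2) + 0.5 * alpha * np.sum(w ** 2)
```

Minimizing `neg_log_posterior` over `w` gives the maximum a posteriori network; the prior term penalizes large weights and thereby curbs the overfitting that the flexibility of the network would otherwise permit. In the full Bayesian treatment referred to in the abstract, hyperparameters such as `alpha` and the network size are themselves compared via the model evidence rather than fixed by hand.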