Bayesian Learning and Evolutionary Parameter Optimization

  • Authors:
  • Thomas Ragg

  • Venue:
  • KI '01 Proceedings of the Joint German/Austrian Conference on AI: Advances in Artificial Intelligence
  • Year:
  • 2001

Abstract

In this paper I argue that the combination of evolutionary algorithms and neural networks can be fruitful in several ways. When estimating a functional relationship from empirical data we face three basic problems. Firstly, we have to deal with noisy and finite-sized data sets, which is usually handled by regularization techniques, for example Bayesian learning. Secondly, for many applications we need to encode the problem by features and must decide which, and how many, of them to use. Bearing in mind the empty space phenomenon, it is often advantageous to select few features and estimate a non-linear function in a low-dimensional space. Thirdly, once several networks have been trained, we are left with the problem of model selection. These problems can be tackled by integrating several stochastic methods into an evolutionary search algorithm. The search can be designed so that it explores the parameter space to find regions corresponding to networks with a high posterior probability of being a model for the process that generated the data. The benefits of the approach are demonstrated on a regression and a classification problem.
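The idea of the abstract — an evolutionary search over feature subsets and regularization strength, ranked by a posterior-style score — can be sketched in a toy form. This is an illustrative reconstruction, not the paper's actual algorithm: a ridge-regularized linear model stands in for a trained network with a Bayesian weight prior, and a BIC-style penalized likelihood stands in for the posterior probability of the model. All names and numerical settings here are the sketch's own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: the target depends on only 2 of 5 candidate features,
# so a good search should select a low-dimensional feature subset.
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200)

def fit_ridge(Xs, y, alpha):
    """Stand-in for network training with weight decay (a Gaussian prior):
    returns the weights and the residual sum of squares."""
    d = Xs.shape[1]
    w = np.linalg.solve(Xs.T @ Xs + alpha * np.eye(d), Xs.T @ y)
    resid = y - Xs @ w
    return w, float(resid @ resid)

def log_posterior_proxy(genome, X, y):
    """BIC-style penalized log-likelihood: fit quality minus a complexity
    penalty, a crude stand-in for ranking models by posterior probability."""
    mask, alpha = genome
    if not mask.any():
        return -np.inf  # a model with no features is ruled out
    _, rss = fit_ridge(X[:, mask], y, alpha)
    n, k = len(y), int(mask.sum())
    return -0.5 * n * np.log(rss / n) - 0.5 * k * np.log(n)

def mutate(genome):
    """Flip one feature in or out and jitter the regularization strength."""
    mask, alpha = genome
    mask = mask.copy()
    i = rng.integers(len(mask))
    mask[i] = ~mask[i]
    alpha = float(np.clip(alpha * np.exp(0.3 * rng.normal()), 1e-4, 1e2))
    return mask, alpha

# (mu + lambda)-style evolutionary search over feature masks and decay strength.
pop = [(rng.random(5) < 0.5, 1.0) for _ in range(8)]
for _ in range(60):
    children = [mutate(g) for g in pop for _ in range(2)]
    pop = sorted(pop + children,
                 key=lambda g: -log_posterior_proxy(g, X, y))[:8]

best_mask, best_alpha = pop[0]
print("selected features:", np.flatnonzero(best_mask))
```

With the data above, the penalized score rewards the two informative features and discourages the spurious ones, so the surviving genomes concentrate on small feature subsets — the low-dimensional behaviour the abstract motivates via the empty space phenomenon.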