2010 Special Issue: Meta-learning approach to neural network optimization

  • Authors:
  • Pavel Kordík; Jan Koutník; Jan Drchal; Oleg Kovářík; Miroslav Čepek; Miroslav Šnorek

  • Affiliations:
  • Department of Computer Science and Engineering, FEE, Czech Technical University, Prague, Czech Republic (all authors)

  • Venue:
  • Neural Networks
  • Year:
  • 2010

Abstract

Optimizing neural network topology, weights, and neuron transfer functions for a given data set and problem is not an easy task. In this article, we focus primarily on building an optimal feed-forward neural network classifier for i.i.d. data sets. We apply meta-learning principles to the optimization of the network's structure and functions. We show that diversity promotion, ensembling, self-organization, and induction are beneficial for this problem. We combine several different neuron types, trained by various optimization algorithms, to build a supervised feed-forward neural network called Group of Adaptive Models Evolution (GAME). The approach was tested on a large number of benchmark data sets. The experiments show that combining different optimization algorithms within the network is the best choice when performance is averaged over several real-world problems.
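
The abstract describes an inductive construction in which candidate neurons with different transfer functions are trained by different optimization algorithms, and the best-generalizing candidates are kept. The following is a loose, hypothetical sketch of that general style of candidate generation and selection, not the authors' GAME implementation; the transfer functions, toy optimizers, and all names below are illustrative assumptions.

```python
# Illustrative sketch: grow one layer of a feed-forward model from diverse
# candidate neurons (different transfer functions, different optimizers) and
# keep only the candidates with the lowest validation error.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gaussian(z):
    return np.exp(-z ** 2)

TRANSFER_FUNCTIONS = {"linear": lambda z: z, "sigmoid": sigmoid, "gaussian": gaussian}

def unit_output(w, X, transfer):
    # One neuron: affine combination of the inputs followed by a transfer function.
    return transfer(X @ w[:-1] + w[-1])

def train_random_search(X, y, transfer, iters=300):
    # Toy optimizer 1: random search over the weight vector.
    best_w, best_err = None, np.inf
    for _ in range(iters):
        w = rng.normal(size=X.shape[1] + 1)
        err = np.mean((unit_output(w, X, transfer) - y) ** 2)
        if err < best_err:
            best_w, best_err = w, err
    return best_w

def train_gradient_descent(X, y, transfer, iters=200, lr=0.05, eps=1e-4):
    # Toy optimizer 2: gradient descent with finite-difference gradients.
    w = rng.normal(scale=0.1, size=X.shape[1] + 1)
    for _ in range(iters):
        base = np.mean((unit_output(w, X, transfer) - y) ** 2)
        grad = np.zeros_like(w)
        for i in range(len(w)):
            w_eps = w.copy()
            w_eps[i] += eps
            grad[i] = (np.mean((unit_output(w_eps, X, transfer) - y) ** 2) - base) / eps
        w -= lr * grad
    return w

OPTIMIZERS = [train_random_search, train_gradient_descent]

def grow_layer(X_tr, y_tr, X_va, y_va, n_candidates=12, n_survivors=3):
    # Generate diverse candidates (random transfer function + optimizer),
    # then keep the ones that generalize best on held-out validation data.
    candidates = []
    names = list(TRANSFER_FUNCTIONS)
    for _ in range(n_candidates):
        name = names[rng.integers(len(names))]
        transfer = TRANSFER_FUNCTIONS[name]
        optimizer = OPTIMIZERS[rng.integers(len(OPTIMIZERS))]
        w = optimizer(X_tr, y_tr, transfer)
        val_err = np.mean((unit_output(w, X_va, transfer) - y_va) ** 2)
        candidates.append((val_err, name, optimizer.__name__, w))
    candidates.sort(key=lambda c: c[0])
    return candidates[:n_survivors]

# Tiny usage example on synthetic data.
X = rng.normal(size=(200, 4))
y = sigmoid(X[:, 0] - 0.5 * X[:, 1])
survivors = grow_layer(X[:150], y[:150], X[150:], y[150:])
for err, name, opt_name, _ in survivors:
    print(f"surviving unit: {name:8s} ({opt_name}) validation MSE = {err:.4f}")
```

Selecting survivors on validation data rather than training data is what makes such a construction self-organizing in practice: units that merely memorize the training set are pruned, and heterogeneous transfer functions and optimizers keep the surviving pool diverse.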