Pruning neural networks with distribution estimation algorithms

  • Authors:
  • Erick Cantú-Paz

  • Affiliations:
  • Center for Applied Scientific Computing, Lawrence Livermore National Laboratory, Livermore, CA

  • Venue:
  • GECCO'03: Proceedings of the 2003 international conference on Genetic and evolutionary computation: Part I
  • Year:
  • 2003

Abstract

This paper describes the application of four evolutionary algorithms to the pruning of neural networks used in classification problems. Besides a simple genetic algorithm (GA), the paper considers three distribution estimation algorithms (DEAs): a compact GA, an extended compact GA, and the Bayesian Optimization Algorithm. The objective is to determine whether the DEAs offer advantages over the simple GA in terms of accuracy or speed on this problem. The experiments used a feedforward neural network trained with standard backpropagation and 15 public-domain and artificial data sets. In most cases, the pruned networks seemed to have accuracy equal to or better than the original fully connected networks. We found few differences in the accuracy of the networks pruned by the four EAs, but large differences in execution time. The results suggest that a simple GA with a small population might be the best algorithm for pruning networks on the data sets we tested.
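
To make the pruning setup concrete, below is a minimal sketch (not the paper's code) of the kind of encoding such work typically uses: each bit of a GA chromosome masks one connection of an already-trained feedforward network, and fitness is the masked network's accuracy on held-out data. The toy network, the synthetic data set, and all GA parameters here are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" network: 2 inputs -> 4 hidden units -> 1 output.
# In the paper the weights come from backpropagation; here they are random stand-ins.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))
n_weights = W1.size + W2.size

# Synthetic validation data standing in for the paper's 15 data sets.
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)

def accuracy(mask):
    """Fitness: classification accuracy of the network with masked (pruned) weights."""
    m1 = mask[:W1.size].reshape(W1.shape)
    m2 = mask[W1.size:].reshape(W2.shape)
    h = np.tanh(X @ (W1 * m1))
    logits = (h @ (W2 * m2)).ravel()
    out = 1.0 / (1.0 + np.exp(-logits))
    return np.mean((out > 0.5) == y)

def simple_ga(pop_size=20, generations=30, p_mut=0.02):
    """A plain GA over binary pruning masks (assumed parameters, not the paper's)."""
    pop = rng.integers(0, 2, size=(pop_size, n_weights))
    for _ in range(generations):
        fit = np.array([accuracy(ind) for ind in pop])
        # Binary tournament selection.
        idx = rng.integers(0, pop_size, size=(pop_size, 2))
        parents = pop[np.where(fit[idx[:, 0]] >= fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # Uniform crossover between each parent and its neighbor.
        swap = rng.random((pop_size, n_weights)) < 0.5
        children = np.where(swap, parents, np.roll(parents, 1, axis=0))
        # Bit-flip mutation.
        flip = rng.random(children.shape) < p_mut
        pop = np.where(flip, 1 - children, children)
    fit = np.array([accuracy(ind) for ind in pop])
    return pop[np.argmax(fit)], fit.max()

best_mask, best_acc = simple_ga()
print(f"pruned {np.sum(best_mask == 0)} of {n_weights} weights, accuracy {best_acc:.2f}")
```

The DEAs compared in the paper (compact GA, extended compact GA, BOA) would replace the crossover and mutation steps above with the estimation and sampling of a probability distribution over the same binary masks; the chromosome encoding and the accuracy-based fitness stay the same.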