An evolutionary approach for achieving scalability with general regression neural networks

  • Authors:
  • Kenan Casey, Aaron Garrett, Joseph Gay, Lacey Montgomery, Gerry Dozier

  • Affiliations:
  • Department of Computer Science and Software Engineering, Auburn University, Auburn, USA (all authors)

  • Venue:
  • Natural Computing: an international journal
  • Year:
  • 2009

Abstract

In this paper, we present an approach to overcome the scalability issues associated with instance-based learners. Our system uses evolutionary computational techniques to determine the minimal set of training instances needed to achieve good classification accuracy with an instance-based learner. In this way, instance-based learners need not store all available training data, but instead store only those instances required for the desired accuracy. Additionally, we explore the utility of evolving the optimal feature set used by the learner for a given problem. In this way, we attempt to deal with the so-called "curse of dimensionality" associated with computational learning systems. To these ends, we introduce the Evolutionary General Regression Neural Network. This design uses an estimation of distribution algorithm to generate both the optimal training set and the optimal feature set for a general regression neural network. We compare its performance against a standard general regression neural network and an optimized support vector machine on four benchmark classification problems.
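To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of the two ingredients the abstract describes: a General Regression Neural Network classifier in the style of Specht (1991), which stores training instances as pattern units and classifies by a kernel-weighted vote, and a simple PBIL-style estimation of distribution algorithm over instance-selection bitmasks. The specific EDA variant, the smoothing parameter `sigma`, and all hyperparameter values here are illustrative assumptions; the paper does not specify them in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for a reproducible sketch

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """GRNN classification: each stored instance is a pattern unit; queries are
    classified by a Gaussian-kernel-weighted vote over one-hot class targets.
    (sigma is an illustrative choice, not a value from the paper.)"""
    classes = np.unique(y_train)
    onehot = (y_train[:, None] == classes[None, :]).astype(float)      # (n, k)
    # Squared Euclidean distance from each query to each stored instance
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))                               # (m, n)
    return classes[np.argmax(w @ onehot, axis=1)]                      # (m,)

def eda_select_instances(X, y, X_val, y_val, generations=30, pop_size=20, lr=0.1):
    """PBIL-style EDA over instance-selection bitmasks (a hypothetical stand-in
    for the paper's EDA): sample masks from a vector of marginal keep-probabilities,
    score each reduced training set by validation accuracy, and shift the
    marginals toward the best mask found so far."""
    n = len(X)
    p = np.full(n, 0.5)                      # marginal P(keep instance i)
    best_mask, best_acc = np.ones(n, dtype=bool), 0.0
    for _ in range(generations):
        pop = rng.random((pop_size, n)) < p  # sample a population of bitmasks
        for mask in pop:
            if mask.sum() == 0:
                continue                     # empty training set is invalid
            acc = np.mean(grnn_predict(X[mask], y[mask], X_val) == y_val)
            # Break accuracy ties in favor of the smaller training set
            if acc > best_acc or (acc == best_acc and mask.sum() < best_mask.sum()):
                best_mask, best_acc = mask, acc
        p = (1 - lr) * p + lr * best_mask    # learn toward the incumbent best
    return best_mask, best_acc
```

On a toy two-cluster problem, the EDA typically retains only a few instances per class while keeping validation accuracy intact, which is the scalability effect the paper targets. Evolving a feature-selection bitmask alongside the instance mask would follow the same pattern, with the mask applied to columns of `X` rather than rows.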