Pareto evolutionary neural networks

  • Authors:
  • J. E. Fieldsend; S. Singh

  • Affiliations:
  • Dept. of Comput. Sci., Univ. of Exeter, UK

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 2005

Quantified Score

Hi-index 0.01

Abstract

For the purposes of forecasting (or classification) tasks, neural networks (NNs) are typically trained with respect to Euclidean distance minimization. This is commonly the case irrespective of any other end-user preferences. In a number of situations, most notably time series forecasting, users may have other objectives in addition to Euclidean distance minimization. Recent studies in the NN domain have confronted this problem by propagating a linear sum of errors. However, this approach implicitly assumes a priori knowledge of the error surface defined by the problem, which, typically, is not the case. This study constructs a novel methodology for implementing multiobjective optimization within the evolutionary neural network (ENN) domain. This methodology enables the parallel evolution of a population of ENN models which exhibit estimated Pareto optimality with respect to multiple error measures. A new method is derived from this framework, the Pareto evolutionary neural network (Pareto-ENN). The Pareto-ENN evolves a population of models that may be heterogeneous in their topologies, inputs, and degree of connectivity, and maintains a set of the Pareto optimal ENNs that it discovers. New generalization methods to deal with the unique properties of multiobjective error minimization that are not apparent in the uni-objective case are presented and compared on synthetic data, with a novel method based on bootstrapping of the training data shown to significantly improve generalization ability. Finally, experimental evidence is presented in this study demonstrating the general application potential of the framework by generating populations of ENNs for forecasting 37 different international stock indexes.
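
The central idea described above, evolving a population of networks against several error measures at once while retaining the non-dominated (estimated Pareto optimal) members, can be illustrated with a minimal sketch. The snippet below is not the authors' Pareto-ENN algorithm: the single-hidden-layer network, Gaussian mutation, archive-based parent selection, and the two losses (RMSE and MAE) are assumptions chosen only to make the multiobjective archive mechanics concrete.

```python
# Illustrative sketch (not the paper's exact method): evolve small MLPs under
# two error measures and keep an archive of non-dominated parameter vectors.
import numpy as np

rng = np.random.default_rng(0)

def init_net(n_in, n_hidden):
    # Flat parameter vector: W1 (n_hidden x n_in), b1, W2 (n_hidden), b2.
    return rng.normal(scale=0.5, size=n_hidden * (n_in + 1) + n_hidden + 1)

def predict(w, X, n_in, n_hidden):
    W1 = w[:n_hidden * n_in].reshape(n_hidden, n_in)
    b1 = w[n_hidden * n_in:n_hidden * (n_in + 1)]
    W2 = w[n_hidden * (n_in + 1):n_hidden * (n_in + 1) + n_hidden]
    b2 = w[-1]
    h = np.tanh(X @ W1.T + b1)          # hidden layer
    return h @ W2 + b2                   # single linear output

def errors(w, X, y, n_in, n_hidden):
    e = predict(w, X, n_in, n_hidden) - y
    # Two objectives to minimize: RMSE (Euclidean-style) and MAE.
    return np.array([np.sqrt(np.mean(e ** 2)), np.mean(np.abs(e))])

def dominates(a, b):
    # a dominates b if it is no worse in all objectives and better in one.
    return np.all(a <= b) and np.any(a < b)

def update_archive(archive, candidate):
    w, f = candidate
    if any(dominates(fa, f) for _, fa in archive):
        return archive                   # candidate is dominated: discard
    archive = [(wa, fa) for wa, fa in archive if not dominates(f, fa)]
    archive.append((w, f))               # keep only non-dominated members
    return archive

# Toy regression data and a simple (1+1)-style evolutionary loop.
n_in, n_hidden = 4, 5
X = rng.normal(size=(200, n_in))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

archive = []
for w in (init_net(n_in, n_hidden) for _ in range(20)):
    archive = update_archive(archive, (w, errors(w, X, y, n_in, n_hidden)))

for gen in range(200):
    parent, _ = archive[rng.integers(len(archive))]            # sample from archive
    child = parent + rng.normal(scale=0.1, size=parent.shape)  # Gaussian mutation
    archive = update_archive(archive, (child, errors(child, X, y, n_in, n_hidden)))

for _, f in sorted(archive, key=lambda t: t[1][0]):
    print("RMSE=%.4f  MAE=%.4f" % (f[0], f[1]))
```

The printed archive traces an estimated Pareto front: no single network is best under both error measures, which is the trade-off the abstract argues a linear sum of errors cannot capture without prior knowledge of the error surface.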