A massively parallel architecture for distributed genetic algorithms

  • Authors:
  • Sven E. Eklund

  • Affiliations:
  • Dalarna University, S-781 88 Borlänge, Sweden

  • Venue:
  • Parallel Computing - Special issue: Parallel and nature-inspired computational paradigms and applications
  • Year:
  • 2004


Abstract

Genetic algorithms are a group of stochastic search algorithms with a broad field of application. Although highly successful in many fields, genetic algorithms in general suffer from long execution times. In this article we describe parallel models for genetic algorithms in general, and the massively parallel Diffusion Model in particular, in order to speed up execution.

Implemented in hardware, the Diffusion Model constitutes an efficient, flexible, scalable and mobile machine learning system. This fine-grained system consists of a large number of processing nodes that evolve a large number of small, overlapping subpopulations. Every processing node has an embedded CPU that executes a linear machine code representation at a rate of up to 20,000 generations per second.

Besides being efficient, the hardware implementation of this model is highly portable and applicable to mobile, on-line applications. The architecture is also scalable, so that larger problems can be addressed by a system with more processing nodes. Finally, the use of linear machine code as the genetic programming representation, and of VHDL as the hardware description language, makes the system highly flexible and easy to adapt to different applications.

Through a series of experiments we determine settings for the most important parameters of the Diffusion Model. We also demonstrate the effectiveness and flexibility of the architecture on a set of regression problems, a classification application and a time series forecasting application.
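The fine-grained scheme the abstract describes can be illustrated in software. The sketch below is a minimal, hypothetical diffusion (cellular) GA in Python: one individual per cell of a toroidal grid, with selection and crossover confined to overlapping local neighborhoods so that good genes "diffuse" across the grid. The bit-string genome, OneMax fitness, grid size and mutation rate are all illustrative assumptions, not the paper's linear machine code representation or hardware parameters.

```python
import random

GRID, GENES = 8, 16  # illustrative sizes, not the paper's parameters

def fitness(ind):
    return sum(ind)  # OneMax: count of 1-bits (placeholder objective)

def random_ind():
    return [random.randint(0, 1) for _ in range(GENES)]

def neighbors(x, y):
    # von Neumann neighborhood with toroidal wrap-around; neighborhoods
    # of adjacent cells overlap, which is what lets fit genes diffuse.
    return [(x, (y - 1) % GRID), (x, (y + 1) % GRID),
            ((x - 1) % GRID, y), ((x + 1) % GRID, y)]

def step(grid):
    new = [[None] * GRID for _ in range(GRID)]
    for x in range(GRID):
        for y in range(GRID):
            # Local selection: pick the fittest neighbor as mate.
            mate = max((grid[i][j] for i, j in neighbors(x, y)), key=fitness)
            # One-point crossover plus per-gene bit-flip mutation.
            cut = random.randrange(1, GENES)
            child = grid[x][y][:cut] + mate[cut:]
            child = [g ^ (random.random() < 0.01) for g in child]
            # Elitist replacement: keep the child only if it is no worse.
            new[x][y] = child if fitness(child) >= fitness(grid[x][y]) else grid[x][y]
    return new

random.seed(0)
grid = [[random_ind() for _ in range(GRID)] for _ in range(GRID)]
for _ in range(200):
    grid = step(grid)
best = max((ind for row in grid for ind in row), key=fitness)
print(fitness(best))
```

In the hardware system each grid cell corresponds to a processing node updating in parallel; this sequential sweep only mimics that behavior for clarity.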