Enhancing data parallelism for Ant Colony Optimization on GPUs

  • Authors:
  • José M. Cecilia; José M. García; Andy Nisbet; Martyn Amos; Manuel Ujaldón

  • Affiliations:
  • Computer Architecture Department, University of Murcia, 30100 Murcia, Spain (Cecilia, García)
  • Novel Computation Group, Division of Computing and IS, Manchester Metropolitan University, Manchester M1 5GD, UK (Nisbet, Amos)
  • Computer Architecture Department, University of Málaga, 29071 Málaga, Spain (Ujaldón)

  • Venue:
  • Journal of Parallel and Distributed Computing
  • Year:
  • 2013

Abstract

Graphics Processing Units (GPUs) have evolved into highly parallel and fully programmable architectures over the past five years, and the advent of CUDA has facilitated their use in many real-world applications. In this paper, we deal with a GPU implementation of Ant Colony Optimization (ACO), a population-based optimization method that comprises two major stages: tour construction and pheromone update. Because of its inherently parallel nature, ACO is well-suited to GPU implementation, but it also poses significant challenges due to irregular memory access patterns. Our contribution within this context is threefold: (1) a data parallelism scheme for tour construction tailored to GPUs, (2) novel GPU programming strategies for the pheromone update stage, and (3) a new mechanism called I-Roulette to replicate the classic roulette wheel while improving GPU parallelism. Our implementation achieves speed-ups exceeding 20x for both stages of the ACO algorithm as applied to the Travelling Salesman Problem (TSP), compared with a sequential counterpart running on a single-threaded high-end CPU. Moreover, an extensive discussion of alternative GPU implementation paths shows how to deal with parallel computation on connected graph structures. This, in turn, suggests a broader area of inquiry, in which algorithm designers may learn to adapt similar optimization methods to GPU architectures.
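The abstract contrasts the classic roulette wheel with I-Roulette, the paper's more GPU-friendly selection mechanism. To make the contrast concrete, here is a minimal Python sketch under our reading of the abstract: the classic wheel needs a running (prefix) sum over the weights, which serializes poorly on a GPU, while an independent-roulette scheme gives every candidate its own random draw scaled by its weight and takes the argmax, which maps onto a parallel max-reduction. The function names and exact weighting are our own illustration, not the authors' code.

```python
import random

def roulette_wheel(weights, rng=random.random):
    """Classic sequential roulette wheel: draw r in [0, total) and
    walk the running sum until it passes r. The accumulation makes
    this a prefix-sum pattern, which is awkward to parallelize."""
    total = sum(weights)
    r = rng() * total
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1  # guard against floating-point round-off

def i_roulette(weights, rng=random.random):
    """Independent-roulette sketch: each candidate draws its own
    random number and scales it by its weight; the largest score
    wins. Every candidate's work is independent, so on a GPU the
    selection becomes one parallel max-reduction per ant instead
    of a sequential scan."""
    scores = [w * rng() for w in weights]
    return max(range(len(weights)), key=scores.__getitem__)
```

Note that the two schemes do not produce identical selection distributions; the abstract's claim is that I-Roulette replicates the behavior of the classic wheel closely enough for ACO while exposing far more parallelism.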