General-Purpose computing on Graphics Processing Units (GPGPU) is a major paradigm shift in parallel computing that promises dramatic performance gains. However, GPGPUs also bring an unprecedented level of complexity to algorithmic design and software development. In this paper we describe the challenges and design choices involved in parallelizing the Bayesian Optimization Algorithm (BOA) to solve complex combinatorial optimization problems on nVidia commodity graphics hardware using the Compute Unified Device Architecture (CUDA). BOA is a well-known multivariate Estimation of Distribution Algorithm (EDA) that learns a Bayesian network (BN) from promising solutions and then samples the BN to generate new candidate solutions. Our implementation is fully compatible with modern commodity GPUs, and we therefore call it gBOA (BOA on GPU). In the results section, we present several numerical tests and performance measurements obtained by running gBOA on an nVidia Tesla C1060 GPU. In the best case, we obtain a speedup of up to 13x.
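To make the learn-then-sample cycle of an EDA concrete, the sketch below runs a minimal estimation-of-distribution loop on the OneMax problem. It is a deliberate simplification, not the paper's gBOA: where BOA learns a full Bayesian network over the variables, this sketch substitutes an independent univariate marginal model (the UMDA variant), since the model-build/sample/select structure of the loop is the same. All names and parameter values are illustrative assumptions.

```python
import random

def eda_onemax(n_bits=32, pop_size=100, n_select=50, generations=60, seed=1):
    """Minimal estimation-of-distribution loop on OneMax.

    BOA would learn a Bayesian network from the selected solutions; this
    simplified sketch uses independent per-bit marginals (UMDA) instead,
    to show only the learn/sample/select cycle shared by EDAs.
    """
    rng = random.Random(seed)
    # Start from a uniform model: each bit is 1 with probability 0.5.
    probs = [0.5] * n_bits
    best = 0
    for _ in range(generations):
        # 1. Sample a population from the current probabilistic model.
        pop = [[1 if rng.random() < p else 0 for p in probs]
               for _ in range(pop_size)]
        # 2. Select the most promising solutions (truncation selection;
        #    on OneMax the fitness is simply the number of ones).
        pop.sort(key=sum, reverse=True)
        selected = pop[:n_select]
        best = max(best, sum(selected[0]))
        # 3. Re-estimate the model from the selected solutions
        #    (the step BOA replaces with Bayesian-network learning).
        probs = [sum(ind[i] for ind in selected) / n_select
                 for i in range(n_bits)]
    return best

print(eda_onemax())
```

Each of the three numbered steps is also a natural unit of GPU parallelism: sampling and fitness evaluation are independent per individual, and model re-estimation is independent per variable, which is what makes EDAs like BOA amenable to CUDA-style data-parallel execution.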