This paper proposes new parallel versions of several estimation of distribution algorithms (EDAs). The focus is on preserving the behavior of sequential EDAs that use probabilistic graphical models (Bayesian networks and Gaussian networks), implementing a master–slave workload distribution for the most computationally intensive phases: learning the probability distribution and, in one algorithm, sampling and evaluation of individuals. In discrete domains, we explain the parallelization of the $EBNA_{BIC}$ and $EBNA_{PC}$ algorithms; in continuous domains, the selected algorithms are $EGNA_{BIC}$ and $EGNA_{EE}$. The implementation uses two APIs: the Message Passing Interface (MPI) and POSIX threads. The resulting parallel programs can run efficiently on a range of target parallel computers. Experiments evaluating the programs in terms of speed-up and efficiency were carried out on a cluster of multiprocessors; compared with the sequential versions, the parallel programs show reasonable gains in speed.
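The master–slave scheme described in the abstract can be illustrated with a minimal sketch: a pool of workers evaluates sampled individuals in parallel while the master learns the model and samples the next population. This toy continuous EDA uses an axis-aligned Gaussian model rather than the full Gaussian network of $EGNA$, and all function names and parameters here are illustrative assumptions, not taken from the paper:

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np


def sphere(x):
    """Fitness to minimize: sum of squares (optimum at the origin).
    Stands in for an expensive objective worth farming out to workers."""
    return float(np.sum(x * x))


def parallel_gaussian_eda(dim=5, pop=60, top=15, gens=40, workers=4, seed=0):
    """Toy continuous EDA with master-slave fitness evaluation.

    The master samples a population from a univariate Gaussian model,
    sends evaluations to a worker pool, then re-estimates the model
    from the truncation-selected elite (illustrative only)."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(gens):
            X = rng.normal(mu, sigma, size=(pop, dim))   # sampling phase
            fits = list(pool.map(sphere, X))             # parallel evaluation
            elite = X[np.argsort(fits)[:top]]            # truncation selection
            mu = elite.mean(axis=0)                      # model learning
            sigma = elite.std(axis=0) + 1e-3             # variance floor
    return mu


mu = parallel_gaussian_eda()
```

In the paper's setting the workers would instead score candidate network structures (e.g., BIC terms) or sample-and-evaluate blocks of individuals, but the communication pattern — master owns the model, slaves absorb the heavy per-individual work — is the same.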