Multi-agent simulated annealing algorithm based on differential evolution algorithm
International Journal of Bio-Inspired Computation
The canonical simulated annealing (SA) algorithm converges extremely slowly, and the implementation and efficiency of parallel SA algorithms are typically problem-dependent. Multi-agent SA (MSA) algorithms, which use learned knowledge to guide their sampling, can naturally overcome these intrinsic limitations. The learning strategy, which determines how knowledge is represented, selected, and used, may significantly affect the performance of MSA algorithms. Using the current population as the learned knowledge, we design three knowledge selection schemes, selecting from better agents, from worse agents, and from all agents at random, to choose the knowledge that guides sampling. A differential perturbation operator is designed to generate candidate solutions from the selected knowledge. Comparisons were carried out on four widely used benchmark functions, and the results show that the learning-based MSA algorithm performs well in terms of convergence speed and solution accuracy. Furthermore, the simulation results also show that even learning from worse agents significantly outperforms not learning at all.
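The abstract does not give the exact operator or parameter settings, but the scheme it describes can be sketched roughly as follows: each agent proposes a candidate via a DE-style differential perturbation built from two agents chosen by a knowledge selection scheme ("better", "worse", or "random"), and the candidate is accepted under the usual Metropolis rule. The function names, the sphere benchmark, the pool-splitting rule, and all parameter values here are illustrative assumptions, not the paper's actual configuration.

```python
import math
import random

def sphere(x):
    # Sphere benchmark: global minimum 0 at the origin.
    return sum(v * v for v in x)

def select_knowledge(fitness, i, scheme="better"):
    """Pick two distinct agents other than i according to the scheme.

    'better'/'worse' restrict the pool to the fitter/less fit half of the
    population (an assumed interpretation); 'random' uses all other agents.
    """
    others = [j for j in range(len(fitness)) if j != i]
    if scheme == "random":
        pool = others
    else:
        ranked = sorted(others, key=lambda j: fitness[j])  # ascending = better first
        half = max(2, len(ranked) // 2)
        pool = ranked[:half] if scheme == "better" else ranked[-half:]
    return random.sample(pool, 2)

def msa(dim=5, n_agents=20, iters=500, f_scale=0.5,
        t0=1.0, cooling=0.995, scheme="better"):
    """Minimal multi-agent SA sketch with a differential perturbation operator."""
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_agents)]
    fit = [sphere(x) for x in pop]
    t = t0
    for _ in range(iters):
        for i in range(n_agents):
            a, b = select_knowledge(fit, i, scheme)
            # Differential perturbation: move agent i along the difference
            # of the two selected knowledge agents (DE-style step).
            cand = [pop[i][d] + f_scale * (pop[a][d] - pop[b][d])
                    for d in range(dim)]
            cf = sphere(cand)
            # Metropolis acceptance: always take improvements, sometimes
            # accept worse candidates at temperature t.
            if cf < fit[i] or random.random() < math.exp(-(cf - fit[i]) / t):
                pop[i], fit[i] = cand, cf
        t *= cooling  # geometric cooling schedule (assumed)
    return min(fit)
```

Swapping `scheme` among `"better"`, `"worse"`, and `"random"` reproduces the three selection variants the abstract compares; with `scheme="worse"` the population still carries useful difference vectors, which is one intuition for why learning from worse agents can still beat not learning at all.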