Dynamic GP models: an overview and recent developments
ASM'12 Proceedings of the 6th international conference on Applied Mathematics, Simulation, Modelling
Gaussian process (GP) models are non-parametric, black-box models that represent a relatively new approach to system identification. Because of their probabilistic nature, GP models are optimized by maximizing the probability of the model, which can be computed via the marginal likelihood. The marginal likelihood of GP models is commonly maximized with deterministic optimization methods, but their success depends critically on the initial values. In addition, the marginal likelihood function often has many local optima in which a deterministic method can become trapped. Stochastic optimization methods can therefore be considered as an alternative, and in this paper we test their applicability to GP model optimization. We performed a comparative study of three stochastic algorithms: the genetic algorithm, differential evolution, and particle swarm optimization. Empirical tests were carried out on a benchmark problem of modeling the concentration of CO2 in the atmosphere. The results indicate that, with proper tuning, differential evolution and particle swarm optimization significantly outperform the conjugate gradient method.
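The setup described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses toy sinusoidal data in place of the CO2 benchmark, a simple squared-exponential covariance with three hyperparameters (length-scale, signal variance, noise variance), and SciPy's `differential_evolution` as the stochastic optimizer, minimizing the negative log marginal likelihood.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy data as a stand-in for the CO2 benchmark used in the paper.
rng = np.random.default_rng(0)
X = np.linspace(0.0, 10.0, 30)[:, None]
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(30)

def neg_log_marginal_likelihood(theta):
    """Negative log marginal likelihood of a GP with a squared-exponential
    covariance; theta = log10 of [length-scale, signal var., noise var.]."""
    ell, sf2, sn2 = 10.0 ** np.asarray(theta)
    d = X - X.T                                   # pairwise input differences
    K = sf2 * np.exp(-0.5 * (d / ell) ** 2) + sn2 * np.eye(len(X))
    L = np.linalg.cholesky(K)                     # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))   # K^{-1} y
    return (0.5 * y @ alpha
            + np.log(np.diag(L)).sum()
            + 0.5 * len(X) * np.log(2.0 * np.pi))

# Stochastic global search over the hyperparameters (differential evolution),
# searching each log10-hyperparameter in [-2, 2].
result = differential_evolution(neg_log_marginal_likelihood,
                                bounds=[(-2.0, 2.0)] * 3, seed=0)
print("log10 hyperparameters:", result.x)
print("negative log marginal likelihood:", result.fun)
```

Unlike a conjugate-gradient run started from a single initial point, the population-based search explores the hyperparameter space globally, which is what makes it robust to the multimodal likelihood surface the abstract describes.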