Randomized direct search algorithms for continuous domains, such as evolution strategies, are basic tools in machine learning. They are especially needed when the gradient of an objective function (e.g., a loss, energy, or reward function) cannot be computed or estimated efficiently. Application areas include supervised and reinforcement learning as well as model selection. These randomized search strategies often rely on normally distributed additive variations of candidate solutions. To search efficiently in non-separable and ill-conditioned landscapes, the covariance matrix of the normal distribution must be adapted, amounting to a variable metric method. Consequently, covariance matrix adaptation (CMA) is considered state-of-the-art in evolution strategies. In order to sample from the normal distribution, the adapted covariance matrix needs to be decomposed, requiring in general Θ(n³) operations, where n is the search space dimension. We propose a new update mechanism which can replace a rank-one covariance matrix update and the computationally expensive decomposition of the covariance matrix. The newly developed update rule reduces the computational complexity of the rank-one covariance matrix adaptation to Θ(n²) without resorting to outdated distributions. We derive new versions of the elitist covariance matrix adaptation evolution strategy (CMA-ES) and the multi-objective CMA-ES. These algorithms are equivalent to the original procedures except that the update step for the variable metric distribution scales better in the problem dimension. We also introduce a simplified variant of the non-elitist CMA-ES with the incremental covariance matrix update and investigate its performance. Apart from the reduced time complexity of the distribution update, the algebraic computations involved in all new algorithms are simpler than in the original versions.
The new update rule improves the performance of the CMA-ES on large-scale machine learning problems in which the objective function can be evaluated quickly.
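The core idea behind such an incremental update can be sketched as follows: instead of storing the covariance matrix C and decomposing it before each sampling step, one maintains a factor A with C = A·Aᵀ and updates A directly in Θ(n²) time after each rank-one change C ← αC + β·vvᵀ, where v = Az is the sampled mutation direction. The sketch below is illustrative only (function name and test values are assumptions, not taken from the paper); it implements the standard closed-form rank-one factor update consistent with this kind of scheme, not necessarily the exact rule derived in the article.

```python
import numpy as np

def rank_one_factor_update(A, alpha, beta, z):
    """Given a factor A with C = A @ A.T, return A' such that
    A' @ A'.T = alpha * C + beta * v @ v.T, where v = A @ z.
    Costs Theta(n^2): one matrix-vector product and one outer product,
    so no eigen- or Cholesky decomposition of C is ever needed.
    (Illustrative sketch; not the paper's exact derivation.)"""
    z = np.asarray(z, dtype=float)
    z2 = float(z @ z)                       # ||z||^2
    v = A @ z                               # mutation step in solution space
    if z2 == 0.0:                           # degenerate direction: pure scaling
        return np.sqrt(alpha) * A
    # Scalar chosen so that expanding A' @ A'.T reproduces alpha*C + beta*v v^T
    c = (np.sqrt(1.0 + (beta / alpha) * z2) - 1.0) / z2
    return np.sqrt(alpha) * (A + c * np.outer(v, z))
```

With A kept up to date this way, new candidate solutions can be drawn directly as `x = m + sigma * A @ np.random.randn(n)`, which is what removes the Θ(n³) decomposition from the sampling loop.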