The parameters of environmental simulation models are often inferred by minimizing differences between simulated output and observed data. Heuristic global search algorithms are a popular choice for performing this minimization, but many algorithms yield lackluster results when computational budgets are restricted, as is often required in practice. One way to improve performance is to limit the search domain by reducing upper and lower parameter bounds. While such range reduction is typically done prior to optimization, this study examined strategies for contracting parameter bounds during optimization. Numerical experiments evaluated a set of novel "telescoping" strategies that work in conjunction with a given optimizer to scale parameter bounds in accordance with the remaining computational budget. Various telescoping functions were considered, including a linear scaling of the bounds and four nonlinear scaling functions that more aggressively reduce parameter bounds either early or late in the optimization. Several heuristic optimizers were integrated with the selected telescoping strategies and applied to numerous optimization test functions, as well as calibration problems involving four environmental simulation models. The test suite ranged from simple 2-parameter surfaces to complex 100-parameter landscapes, facilitating robust comparisons of the selected optimizers across a variety of restrictive computational budgets. All telescoping strategies generally improved the performance of the selected optimizers relative to baseline experiments that used no bounds reduction. Performance improvements varied but were as high as 38% for a real-coded genetic algorithm (RGA), 21% for shuffled complex evolution (SCE), 16% for simulated annealing (SA), 8% for particle swarm optimization (PSO), and 7% for dynamically dimensioned search (DDS). Inter-algorithm comparisons suggest that the SCE and DDS algorithms delivered the best overall performance.
SCE appears well-suited for solving low-dimensional problems using a moderate computational budget, while DDS appears better suited for solving high-dimensional problems using a restricted computational budget.
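The core idea of a telescoping strategy can be sketched in a few lines: as the computational budget is consumed, the parameter bounds contract around the best solution found so far, with a shape exponent controlling whether contraction is aggressive early or late. This is a hypothetical illustration only; the function name, signature, and exact scaling forms are assumptions, not the paper's implementation.

```python
def telescope_bounds(lower, upper, best, frac_used, shape=1.0):
    """Contract [lower, upper] around the current best point.

    lower, upper : lists of the original per-parameter bounds
    best         : current best parameter values found by the optimizer
    frac_used    : fraction of the computational budget consumed, in [0, 1]
    shape        : exponent on the remaining budget; 1.0 gives a linear
                   contraction, values < 1 contract bounds more aggressively
                   early in the search, values > 1 more aggressively late
                   (a sketch of the linear/nonlinear families the abstract
                   describes, not the paper's exact functions)
    """
    # Remaining fraction of the budget sets the remaining bound width.
    widths = [(u - l) * (1.0 - frac_used) ** shape
              for l, u in zip(lower, upper)]
    # Center the contracted interval on the best point, clipped so the
    # new bounds never extend past the original ones.
    new_lower = [max(l, b - w / 2.0)
                 for l, b, w in zip(lower, best, widths)]
    new_upper = [min(u, b + w / 2.0)
                 for u, b, w in zip(upper, best, widths)]
    return new_lower, new_upper
```

For example, with original bounds [0, 10], a current best of 5, and half the budget spent, a linear telescope (`shape=1.0`) yields contracted bounds of [2.5, 7.5]; at `frac_used=0` the original bounds are returned unchanged.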