Response Surface Methodology (RSM) searches for the input combination that optimizes the simulation output, treating the simulation model as a black box. Moreover, this paper assumes that each simulation run requires much computer time. In the first stages of its search, RSM locally fits first-order polynomials; classic RSM then follows the steepest-descent (SD) direction, which is unfortunately scale dependent. Part 1 of this paper therefore derives a scale-independent 'adapted' SD (ASD) that accounts for the covariances between the components of the estimated local gradient. Monte Carlo experiments show that ASD indeed gives a better search direction than SD. Part 2 considers multiple outputs: optimizing a stochastic objective function under stochastic and deterministic constraints. This part uses interior-point methods and binary search to derive a scale-independent search direction and several step sizes along that direction. Monte Carlo examples demonstrate that a neighborhood of the true optimum can indeed be reached in only a few simulation runs.
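The first-stage mechanics described above can be sketched in code. The snippet below fits a first-order polynomial by ordinary least squares to noisy outputs of a toy "simulation" around a current point, then compares the classic SD direction with a scale-free direction that premultiplies the estimated gradient by the inverse of its estimated covariance matrix. This is only an illustrative sketch of the inverse-covariance idea, not the paper's exact ASD formula; the design, sample sizes, and the `sim` function are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_fit(sim, x0, n=60, h=0.1):
    """Fit a first-order polynomial to noisy simulation outputs in a small
    neighborhood of x0 (coded radius h) by OLS.
    Returns the estimated gradient and its estimated covariance matrix.
    Design and estimator choices here are illustrative only."""
    k = len(x0)
    Z = rng.uniform(-1.0, 1.0, size=(n, k))        # coded design points
    y = np.array([sim(x0 + h * z) for z in Z])     # run the 'simulation'
    A = np.column_stack([np.ones(n), Z])           # intercept + main effects
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    s2 = resid @ resid / (n - k - 1)               # residual variance estimate
    cov_beta = s2 * np.linalg.inv(A.T @ A)         # OLS covariance estimate
    return beta[1:], cov_beta[1:, 1:]              # drop the intercept terms

def sim(x):
    """Toy black-box 'simulation': a badly scaled quadratic plus output noise."""
    return 100.0 * x[0] ** 2 + x[1] ** 2 + rng.normal(scale=0.5)

g, C = local_fit(sim, np.array([1.0, 1.0]))
d_sd = -g                        # classic steepest descent: scale dependent
d_asd = -np.linalg.solve(C, g)   # inverse-covariance scaling: scale free
```

Because the first input is scaled 100 times more steeply than the second, `d_sd` is dominated by that coordinate; rescaling the inputs would change its direction, whereas the inverse-covariance direction compensates through the estimator's covariance structure.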