Practical methods of optimization; (2nd ed.)
Efficient line search algorithm for unconstrained optimization
Journal of Optimization Theory and Applications
On an Application of Dynamic Programming to the Synthesis of Logical Systems
Journal of the ACM (JACM)
A survey of ranking, selection, and multiple comparison procedures for discrete-event simulation
Proceedings of the 31st conference on Winter simulation: Simulation---a bridge to the future - Volume 1
On the Rate of Convergence of Optimal Solutions of Monte Carlo Approximations of Stochastic Programs
SIAM Journal on Optimization
Ranking and Selection for Steady-State Simulation: Procedures and Perspectives
INFORMS Journal on Computing
Dual adaptive dynamic control of mobile robots using neural networks
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics - Special issue on human computing
Practical exploitation of optimal dual control (ODC) theory continues to be hindered by the difficulties involved in numerically solving the associated stochastic dynamic programming (SDP) problems. In particular, high-dimensional hyper-states coupled with the nesting of optimizations and integrations within these SDP problems render their exact numerical solution computationally prohibitive. This paper presents a new stochastic dynamic programming algorithm that uses a Monte Carlo approach to circumvent the need for numerical integration, thereby dramatically reducing computational requirements. Moreover, being a generalization of iterative dynamic programming (IDP) to the stochastic domain, the new algorithm exhibits reduced sensitivity to the hyper-state dimension and, consequently, is particularly well suited to the solution of ODC problems. A convergence analysis of the new algorithm is provided, and its benefits are illustrated on the problem of ODC of an integrator with unknown gain, originally presented by Åström and Helmersson (Computers and Mathematics with Applications 12A (1986) 653-662).
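The core idea the abstract describes — replacing the nested numerical integration in an SDP backup with a Monte Carlo average over sampled process noise — can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the dynamics `step`, stage cost `cost`, and next-stage value `value_next` are hypothetical placeholders, and the control grid and sample count are arbitrary.

```python
import math
import random

def mc_bellman_backup(x, controls, step, cost, value_next,
                      n_samples=2000, noise_sigma=0.5):
    """One stochastic dynamic-programming backup at state x.

    For each candidate control u, the expectation
    E[cost(x, u) + J(x')] over the process noise w is estimated by a
    Monte Carlo average of sampled trajectories instead of numerical
    quadrature (illustrative sketch; names are assumptions, not the
    paper's API).
    """
    best_u, best_q = None, math.inf
    for u in controls:
        total = 0.0
        for _ in range(n_samples):
            w = random.gauss(0.0, noise_sigma)   # sample noise rather than integrate
            x_next = step(x, u, w)               # simulate one-step dynamics
            total += cost(x, u) + value_next(x_next)
        q = total / n_samples                    # Monte Carlo estimate of the Q-value
        if q < best_q:
            best_u, best_q = u, q
    return best_u, best_q
```

For example, with a noisy integrator `x' = x + u + w`, quadratic stage cost, and a quadratic surrogate for the next-stage value, the backup selects the control that drives the state toward the origin; the estimate's accuracy improves with `n_samples` at the usual O(1/sqrt(n)) Monte Carlo rate.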