Adaptive Newton-based multivariate smoothed functional algorithms for simulation optimization
ACM Transactions on Modeling and Computer Simulation (TOMACS)
We develop four algorithms for simulation-based optimization under multiple inequality constraints. Both the cost and the constraint functions are taken to be long-run averages of certain state-dependent single-stage functions. We pose the problem in the simulation optimization framework using the Lagrange multiplier method. Two of our algorithms estimate only the gradient of the Lagrangian, while the other two estimate both its gradient and Hessian. In the process, we also develop several new estimators for the gradient and Hessian. All of our algorithms use two simulations each. Two of them are based on the smoothed functional (SF) technique, while the other two are based on the simultaneous perturbation stochastic approximation (SPSA) method. We prove the convergence of our algorithms and present numerical experiments on a setting involving an open Jackson network. The Newton-based SF algorithm shows the best overall performance.
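To make the two-simulation gradient estimators concrete, here is a minimal sketch of the SPSA and SF style estimates of the gradient of a Lagrangian, applied to a toy deterministic problem. This is not the paper's actual algorithm: the function names, step sizes, and the stand-in cost and constraint (in the paper these are noisy long-run averages obtained from two parallel simulations) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def lagrangian(theta, lam, cost, constraints):
    """L(theta, lam) = J(theta) + sum_i lam_i * G_i(theta)."""
    return cost(theta) + np.dot(lam, constraints(theta))

def spsa_gradient(theta, lam, cost, constraints, c=0.1):
    """Two-simulation SPSA estimate of grad_theta L.

    delta is a vector of independent +/-1 Bernoulli perturbations;
    both function evaluations (simulations) share the same delta.
    """
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    l_plus = lagrangian(theta + c * delta, lam, cost, constraints)
    l_minus = lagrangian(theta - c * delta, lam, cost, constraints)
    return (l_plus - l_minus) / (2.0 * c * delta)

def sf_gradient(theta, lam, cost, constraints, c=0.1):
    """Two-simulation smoothed-functional estimate: the perturbation
    is Gaussian, and the estimator weights the finite difference by it."""
    eta = rng.standard_normal(theta.shape)
    l_plus = lagrangian(theta + c * eta, lam, cost, constraints)
    l_minus = lagrangian(theta - c * eta, lam, cost, constraints)
    return eta * (l_plus - l_minus) / (2.0 * c)

# Toy stand-ins for the long-run average cost and constraint functions.
cost = lambda th: np.sum(th ** 2)
constraints = lambda th: np.array([th[0] - 1.0])  # G(th) <= 0

theta = np.array([2.0, -1.0])
lam = np.array([0.5])  # Lagrange multiplier held fixed for illustration
for _ in range(2000):
    g = spsa_gradient(theta, lam, cost, constraints)
    theta = theta - 0.01 * g
# theta approaches the minimizer of th0^2 + th1^2 + 0.5*(th0 - 1),
# i.e. roughly (-0.25, 0).
```

In the full algorithms, the multiplier `lam` is itself updated on a slower timescale (ascent on the Lagrangian), and the Newton variants additionally build a perturbation-based estimate of the Hessian from the same pair of simulations.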