In this article, we present three smoothed functional (SF) algorithms for simulation optimization. While one of these estimates only the gradient, using a finite difference approximation with two parallel simulations, the other two are adaptive Newton-based stochastic approximation algorithms that estimate both the gradient and the Hessian. One of the Newton-based algorithms uses only one simulation and has a one-sided estimate of both the gradient and the Hessian, while the other uses two-sided estimates of both quantities and requires two simulations. To obtain the gradient and Hessian estimates, we perturb each parameter component randomly using independent and identically distributed (i.i.d.) Gaussian random variates. The earlier SF algorithms in the literature estimate only the gradient of the objective function. Using similar techniques, we derive two unbiased SF-based estimators for the Hessian and develop suitable three-timescale stochastic approximation procedures for simulation optimization. We present a detailed convergence analysis of our algorithms and report numerical experiments with parameters of dimension 50 on a setting involving a network of M/G/1 queues with feedback. We compare the performance of our algorithms with related algorithms in the literature. While our two-simulation Newton-based algorithm shows the best results overall, our one-simulation algorithm outperforms other one-simulation algorithms.
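To make the two-simulation SF gradient estimation described above concrete, here is a minimal sketch in Python. It perturbs the parameter vector with an i.i.d. Gaussian direction `eta` and averages the two-sided finite-difference form `eta * (J(theta + beta*eta) - J(theta - beta*eta)) / (2*beta)` over many samples. The function names, defaults, and the quadratic test objective are illustrative assumptions, not the paper's implementation (which uses simulated objective values and three-timescale stochastic approximation rather than batch averaging).

```python
import numpy as np

def sf_gradient(J, theta, beta=0.1, n_samples=5000, rng=None):
    """Two-sided smoothed functional (SF) gradient estimate (illustrative sketch).

    Each sample draws an i.i.d. Gaussian perturbation eta and forms the
    two-simulation finite-difference term; averaging over samples
    approximates the gradient of a Gaussian-smoothed version of J.
    """
    rng = rng or np.random.default_rng(0)
    d = len(theta)
    grad = np.zeros(d)
    for _ in range(n_samples):
        eta = rng.standard_normal(d)  # i.i.d. Gaussian perturbation direction
        diff = J(theta + beta * eta) - J(theta - beta * eta)
        grad += eta * diff / (2.0 * beta)
    return grad / n_samples

# Illustrative check on a quadratic objective J(x) = x'x, whose true
# gradient is 2x (the two-sided SF estimate is unbiased here).
theta = np.array([1.0, -2.0, 0.5])
est = sf_gradient(lambda x: float(x @ x), theta)
```

In an actual simulation-optimization setting, `J` would be replaced by noisy simulation output, and the averaging would be done incrementally on the fastest of the algorithm's timescales.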