A common desire with iterative optimization algorithms is that they reach the global optimum rather than getting stranded at a local optimum. Here, we examine the global convergence properties of a "gradient-free" stochastic approximation algorithm called SPSA (simultaneous perturbation stochastic approximation), which has performed well in complex optimization problems. We establish two theorems on the global convergence of SPSA. The first provides conditions under which SPSA converges in probability to a global optimum using the well-known method of injected noise. In the second theorem, we show that, under different conditions, "basic" SPSA without injected noise can also achieve convergence in probability to a global optimum. This latter result can have important benefits for the setup (tuning) and performance of the algorithm. The discussion is supported by numerical studies showing favorable comparisons of SPSA to simulated annealing and genetic algorithms.
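To make the two algorithm variants discussed above concrete, the following is a minimal sketch of SPSA with an optional injected-noise term. It is an illustration only, not the paper's implementation: the function name, gain-sequence constants (`a`, `c`, `alpha`, `gamma`), and the `noise_scale` parameter are assumptions chosen for demonstration. Each iteration perturbs all coordinates simultaneously with a random Bernoulli ±1 direction and forms a two-measurement "gradient-free" gradient estimate; setting `inject_noise=True` adds the Gaussian perturbation used in the first (injected-noise) convergence result, while the default corresponds to "basic" SPSA.

```python
import numpy as np

def spsa_minimize(loss, theta0, n_iter=2000, a=0.1, c=0.1,
                  alpha=0.602, gamma=0.101,
                  inject_noise=False, noise_scale=0.0, seed=0):
    """Minimal SPSA sketch (hypothetical parameter names and defaults)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(n_iter):
        ak = a / (k + 1) ** alpha   # decaying step-size (gain) sequence
        ck = c / (k + 1) ** gamma   # decaying perturbation magnitude
        # Simultaneous perturbation: one random +/-1 sign per coordinate.
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        # Two loss measurements yield the full gradient estimate at once.
        ghat = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2.0 * ck * delta)
        theta = theta - ak * ghat
        if inject_noise:
            # Injected Gaussian noise, the standard device for escaping
            # local minima in global-convergence arguments.
            theta = theta + noise_scale * ak * rng.standard_normal(theta.shape)
    return theta
```

For example, `spsa_minimize(lambda x: float(np.sum(x ** 2)), [2.0, -1.5])` drives the iterate toward the minimizer at the origin using only loss evaluations, with two measurements per iteration regardless of dimension.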