A Parallel Gradient Descent Method for Learning in Analog VLSI Neural Networks
Advances in Neural Information Processing Systems 5, [NIPS Conference]
Nonminimax Filtering in Unknown Irregular Constrained Observation Noise
Automation and Remote Control
Optimal Convergence Rate of the Randomized Algorithms of Stochastic Approximation in Arbitrary Noise
Automation and Remote Control
Online convex optimization in the bandit setting: gradient descent without a gradient
SODA '05 Proceedings of the sixteenth annual ACM-SIAM symposium on Discrete algorithms
Precision of estimation of SPSA algorithm with two measurements
ACMOS'06 Proceedings of the 8th WSEAS international conference on Automatic control, modeling & simulation
New stochastic approximation algorithms for estimation under input disturbances are designed. In the multidimensional case they are simple in form, generate consistent estimates of the unknown parameters under “almost arbitrary” disturbances, and are easily incorporated into the design of quantum devices for estimating the gradient vector of a function of several variables.
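The abstract describes randomized (SPSA-type) stochastic approximation, where the gradient of a multivariable function is estimated from only two noisy measurements per iteration along a random perturbation direction. A minimal sketch of such a two-measurement scheme follows; the gain schedules, the toy quadratic objective, and all parameter values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def spsa_two_measurement(f, theta0, n_iter=2000, a=0.1, c=0.1, seed=0):
    """Two-measurement SPSA sketch (hypothetical gain schedules)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float).copy()
    for k in range(1, n_iter + 1):
        a_k = a / k ** 0.602      # step-size gain (assumed schedule)
        c_k = c / k ** 0.101      # perturbation gain (assumed schedule)
        # Random Bernoulli +/-1 simultaneous-perturbation direction.
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        # Only two measurements of the (noisy) objective per iteration,
        # regardless of the dimension of theta.
        y_plus = f(theta + c_k * delta)
        y_minus = f(theta - c_k * delta)
        # Randomized gradient estimate (for +/-1 entries, 1/delta_i = delta_i).
        g_hat = (y_plus - y_minus) / (2.0 * c_k) * delta
        theta -= a_k * g_hat
    return theta

# Usage: minimize a quadratic observed through additive observation noise
# (a hypothetical test problem, not from the paper).
rng = np.random.default_rng(1)
target = np.array([1.0, -2.0, 0.5])

def noisy_f(x):
    return float(np.sum((x - target) ** 2)) + 0.01 * rng.standard_normal()

est = spsa_two_measurement(noisy_f, np.zeros(3))
```

The two-measurement design is what keeps the algorithm "simple in form": the measurement cost per step is constant in the problem dimension, unlike coordinate-wise finite differences.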