The least mean squares (LMS) method for linear least squares problems differs from the steepest descent method in that it processes data blocks one by one, with intermediate adjustment of the parameter vector being optimized. This mode of operation often leads to faster convergence far from the eventual limit and to slower (sublinear) convergence close to the optimal solution. We embed both LMS and steepest descent, as well as other intermediate methods, within a one-parameter class of algorithms, and we propose a hybrid class of methods that combine the faster early convergence rate of LMS with the faster ultimate linear convergence rate of steepest descent. These methods are well suited for neural network training problems with large data sets. Furthermore, they allow the effective use of scaling based, for example, on diagonal or other approximations of the Hessian matrix.
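To make the contrast concrete, the sketch below implements both update rules for the linear least squares problem min_x (1/2)||Ax - b||^2 and a simple hybrid schedule that runs incremental (LMS-style) passes early and full-gradient steps later. This is only an illustration of the idea, not the one-parameter class itself: the function names, the switch point n_incremental, and the constant stepsize 1/||A||^2 are assumptions made for the sketch.

```python
import numpy as np

def lms_pass(A, b, x, step):
    """One incremental (LMS) pass: adjust x after each data block (here, each row of A)."""
    for a_i, b_i in zip(A, b):
        x = x - step * (a_i @ x - b_i) * a_i
    return x

def steepest_descent_step(A, b, x, step):
    """One full-gradient step for f(x) = 0.5 * ||A x - b||^2."""
    return x - step * (A.T @ (A @ x - b))

def hybrid_solve(A, b, n_incremental=5, n_batch=200):
    """Hypothetical hybrid schedule: incremental passes early (fast progress far
    from the solution), then full-gradient steps (linear convergence near it)."""
    x = np.zeros(A.shape[1])
    # Safe constant stepsize 1/L, where L = ||A||_2^2 bounds the
    # Lipschitz constant of the gradient A^T (A x - b).
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(n_incremental):
        x = lms_pass(A, b, x, step)
    for _ in range(n_batch):
        x = steepest_descent_step(A, b, x, step)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.normal(size=(200, 10))
    x_true = rng.normal(size=10)
    b = A @ x_true + 0.01 * rng.normal(size=200)
    x = hybrid_solve(A, b)
    print("residual norm:", np.linalg.norm(A @ x - b))
```

In the same spirit, the scaling mentioned in the abstract could be sketched by replacing the scalar step with a per-coordinate vector, for example one built from the diagonal of A^T A as a diagonal approximation of the Hessian; this variant is likewise an assumption for illustration, not the paper's prescription.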