Using underapproximations for sparse nonnegative matrix factorization
Pattern Recognition
Structured Learning and Prediction in Computer Vision
Foundations and Trends® in Computer Graphics and Vision
Towards a unified architecture for in-RDBMS analytics
SIGMOD '12 Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data
A Simple but Usually Fast Branch-and-Bound Algorithm for the Capacitated Facility Location Problem
INFORMS Journal on Computing
Hazy: making it easier to build and maintain big-data analytics
Communications of the ACM
Hazy: Making it Easier to Build and Maintain Big-data Analytics
Queue - Web Development
An infeasible-point subgradient method using adaptive approximate projections
Computational Optimization and Applications
The subgradient method is both a heavily employed and widely studied algorithm for non-differentiable optimization. Nevertheless, there are some basic properties of subgradient optimization that, while “well known” to specialists, seem to be rather poorly known in the larger optimization community. This note concerns two such properties, both applicable to subgradient optimization using the divergent series steplength rule. The first involves convergence of the iterative process, and the second deals with the construction of primal estimates when subgradient optimization is applied to maximize the Lagrangian dual of a linear program. The two topics are related in that convergence of the iterates is required to prove correctness of the primal construction scheme.
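The divergent series steplength rule mentioned above uses step sizes whose sum diverges while the individual terms tend to zero (for example, α_k = 1/(k+1)); this is what drives the iterates toward an optimum even though the objective value need not decrease at every step. A minimal sketch of a subgradient method under this rule, on an illustrative one-dimensional non-differentiable problem (the function, names, and iteration count below are assumptions for the example, not from the note):

```python
def subgradient_method(f, subgrad, x0, iters=5000):
    """Minimize a convex, possibly non-differentiable f by subgradient steps.

    Uses the divergent-series steplength alpha_k = 1/(k+1): the step sizes
    sum to infinity but individually tend to zero. Because f(x_k) need not
    decrease monotonically, we track the best iterate seen so far.
    """
    x = x0
    best_x, best_f = x0, f(x0)
    for k in range(iters):
        g = subgrad(x)                 # any subgradient of f at x
        alpha = 1.0 / (k + 1)          # divergent series steplength rule
        x = x - alpha * g
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x, best_f

# Illustrative problem: f(x) = |x - 3|, non-differentiable at its
# minimizer x = 3. A valid subgradient is the sign of (x - 3).
f = lambda x: abs(x - 3.0)
subgrad = lambda x: 1.0 if x > 3.0 else (-1.0 if x < 3.0 else 0.0)
x_star, f_star = subgradient_method(f, subgrad, x0=10.0)
```

The best-iterate bookkeeping reflects a basic point the note builds on: individual iterates may oscillate around the optimum, so convergence statements concern the sequence (or its best/averaged members) rather than monotone descent.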