We show that the optimal complexity of Nesterov's smooth first-order optimization algorithm is preserved when the gradient is computed only up to a small, uniformly bounded error. In applications of this method to semidefinite programs, this means in some instances computing only a few leading eigenvalues of the current iterate instead of a full matrix exponential, which significantly reduces the method's computational cost. This also allows sparse problems to be solved efficiently using sparse maximum eigenvalue packages.
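The eigenvalue idea above can be sketched concretely. In smooth semidefinite optimization, the gradient of the softmax approximation of the maximum eigenvalue, f_mu(X) = mu log tr exp(X/mu), is the normalized matrix exponential exp(X/mu)/tr exp(X/mu). Because small eigenvalues are exponentially damped, only a few leading eigenpairs contribute, so the gradient can be approximated with a sparse eigenvalue solver instead of a full eigendecomposition. The sketch below (function name, smoothing parameter mu, and truncation level k are our illustrative choices, not notation from the paper) uses SciPy's `eigsh` for this purpose:

```python
import numpy as np
from scipy.sparse.linalg import eigsh


def approx_softmax_gradient(X, mu=0.1, k=5):
    """Approximate the gradient of mu*log(tr(exp(X/mu))) for symmetric X
    using only the top-k eigenpairs (a truncated matrix exponential)."""
    # k largest algebraic eigenvalues; eigsh avoids a full decomposition
    vals, vecs = eigsh(X, k=k, which="LA")
    # Shift by the largest eigenvalue for numerical stability before exp
    w = np.exp((vals - vals.max()) / mu)
    w /= w.sum()
    # Rank-k approximation of exp(X/mu)/tr(exp(X/mu))
    return (vecs * w) @ vecs.T


# Usage on a random symmetric matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
X = (A + A.T) / 2
G = approx_softmax_gradient(X, mu=0.1, k=5)
```

The eigenvalues discarded by the truncation are suppressed by a factor of at most exp(-gap/mu), so for moderate mu the approximation error is small and uniformly bounded, which is the regime where the abstract's complexity result applies.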