Fast SDP Relaxations of Graph Cut Clustering, Transduction, and Other Combinatorial Problems. The Journal of Machine Learning Research.
Necessary and Sufficient Global Optimality Conditions for NLP Reformulations of Linear SDP Problems. Journal of Global Optimization.
A Second-Order Cone Cutting Surface Method: Complexity and Application. Computational Optimization and Applications.
Spectral Regularization Algorithms for Learning Large Incomplete Matrices. The Journal of Machine Learning Research.
The Minimum-Rank Gram Matrix Completion via Modified Fixed Point Continuation Method. Proceedings of the 36th International Symposium on Symbolic and Algebraic Computation.
Active Subspace: Toward Scalable Low-Rank Learning. Neural Computation.
Computing Real Solutions of Polynomial Systems via Low-Rank Moment Matrix Completion. Proceedings of the 37th International Symposium on Symbolic and Algebraic Computation.
Low-Rank Quadratic Semidefinite Programming. Neurocomputing.
The low-rank semidefinite programming problem LRSDP_r is a restriction of the semidefinite programming problem SDP in which a bound r is imposed on the rank of X; it is well known that LRSDP_r is equivalent to SDP provided r is not too small. In this paper, we classify the local minima of LRSDP_r and prove the optimal convergence of a slight variant of the successful, yet experimental, algorithm of Burer and Monteiro [5], which handles LRSDP_r via the nonconvex change of variables X = RR^T. In addition, for particular problem classes, we describe a practical technique for obtaining lower bounds on the optimal solution value during the execution of the algorithm. Computational results are presented on a set of combinatorial optimization relaxations, including some of the largest quadratic assignment SDPs solved to date.
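To make the change of variables X = RR^T concrete, here is a minimal sketch on the max-cut SDP relaxation, max (1/4)<L, X> subject to diag(X) = 1, X ⪰ 0. This is not the authors' algorithm (which is an augmented Lagrangian method); instead it uses simple projected gradient ascent, exploiting the fact that with unit-norm rows of R the constraint diag(RR^T) = 1 holds automatically. The function name `maxcut_sdp_lowrank` and all parameter defaults are illustrative assumptions.

```python
import numpy as np

def maxcut_sdp_lowrank(A, r=3, steps=2000, lr=0.05, seed=0):
    """Burer-Monteiro-style sketch for the max-cut SDP relaxation
    max (1/4)<L, X> s.t. diag(X) = 1, X >= 0, via X = R R^T.

    Keeping the rows of R on the unit sphere enforces diag(X) = 1,
    so the constrained SDP becomes gradient ascent over a product
    of spheres (a sketch, not the paper's augmented Lagrangian)."""
    n = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian of adjacency A
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((n, r))         # random low-rank factor, n x r
    R /= np.linalg.norm(R, axis=1, keepdims=True)
    for _ in range(steps):
        G = L @ R                           # gradient of <L, RR^T>/2 (constant absorbed in lr)
        R = R + lr * G                      # ascent step in the factor variable
        R /= np.linalg.norm(R, axis=1, keepdims=True)  # retract rows to the sphere
    X = R @ R.T                             # recovered feasible SDP variable
    return 0.25 * np.sum(L * X), R
```

On a 4-cycle the relaxation is tight (optimal value 4, attained at the rank-one X from the alternating cut), so the low-rank iterate should approach that value from a random start.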