In this paper we extend the smoothing technique (Nesterov in Math Program 103(1):127–152, 2005; Nesterov in Unconstrained convex minimization in relative scale, 2003) to problems of semidefinite optimization. To that end, we develop a simple framework for estimating a Lipschitz constant for the gradient of certain symmetric functions of the eigenvalues of symmetric matrices. Using this technique, we justify Lipschitz constants for some natural smooth approximations of the maximal eigenvalue and the spectral radius of symmetric matrices. We analyze the efficiency of special gradient-type schemes on the problems of minimizing the maximal eigenvalue or the spectral radius of a matrix that depends linearly on the design variables. We show that in the first case the number of iterations of the method is bounded by $O(1/\epsilon)$, where $\epsilon$ is the required absolute accuracy of the problem. In the second case, the number of iterations is bounded by $(4/\delta)\sqrt{(1+\delta)\, r \ln r}$, where $\delta$ is the required relative accuracy and $r$ is the maximal rank of the corresponding linear matrix inequality. Thus, the latter method is a fully polynomial approximation scheme.
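To make the first result concrete, here is a minimal NumPy sketch of the standard log-sum-exp smoothing of the maximal eigenvalue, $f_\mu(X) = \mu \ln \sum_i \exp(\lambda_i(X)/\mu)$, which satisfies $\lambda_{\max}(X) \le f_\mu(X) \le \lambda_{\max}(X) + \mu \ln r$ and has a Lipschitz-continuous gradient with constant $1/\mu$; these are the facts behind the $O(1/\epsilon)$ bound. The function names `smoothed_lambda_max` and `minimize_max_eig`, the plain gradient-descent loop, and the crude Lipschitz estimate are illustrative assumptions, not the paper's actual scheme (which is an accelerated gradient method).

```python
import numpy as np

def smoothed_lambda_max(X, mu):
    """Log-sum-exp smoothing of the maximal eigenvalue:
    f_mu(X) = mu * log(sum_i exp(lambda_i(X) / mu)).
    Satisfies lambda_max(X) <= f_mu(X) <= lambda_max(X) + mu*ln(r),
    and its gradient is Lipschitz-continuous with constant 1/mu."""
    lam, U = np.linalg.eigh(X)
    shift = lam.max() / mu                    # shift for numerical stability
    w = np.exp(lam / mu - shift)              # unnormalized softmax weights
    f = mu * (shift + np.log(w.sum()))
    grad = (U * (w / w.sum())) @ U.T          # U diag(softmax(lam/mu)) U^T
    return f, grad

def minimize_max_eig(A0, As, eps=1e-2, iters=5000):
    """Minimize lambda_max(A0 + sum_k y_k A_k) over y by plain gradient
    descent on the smoothed objective.  Choosing mu = eps / ln(r) keeps
    the smoothing error below eps while the gradient of f_mu stays
    Lipschitz with constant 1/mu (illustrative sketch only)."""
    r = A0.shape[0]
    mu = eps / max(np.log(r), 1.0)
    # Crude bound for the Lipschitz constant of the composite gradient
    # in y:  L <= (sum_k ||A_k||_2^2) / mu.
    L = sum(np.linalg.norm(A, 2) ** 2 for A in As) / mu
    y = np.zeros(len(As))
    f = None
    for _ in range(iters):
        X = A0 + sum(yk * Ak for yk, Ak in zip(y, As))
        f, G = smoothed_lambda_max(X, mu)
        g = np.array([np.tensordot(G, Ak) for Ak in As])  # entries <grad, A_k>
        y -= g / L                                        # gradient step 1/L
    return y, f
```

The relative-accuracy result for the spectral radius rests on a similar smoothing applied in relative scale, which is where the $(4/\delta)\sqrt{(1+\delta)\, r \ln r}$ iteration bound comes from.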