- Robust extraction of spatial correlation. In: Proceedings of the 2006 International Symposium on Physical Design.
- A multidimensional scaling approach for representing XML documents. In: ACM-SE 45: Proceedings of the 45th Annual Southeast Regional Conference.
- SimpleNPKL: simple non-parametric kernel learning. In: ICML '09: Proceedings of the 26th Annual International Conference on Machine Learning.
- Correlation stress testing for value-at-risk: an unconstrained convex optimization approach. Computational Optimization and Applications.
- Non-parametric kernel ranking approach for social image retrieval. In: Proceedings of the ACM International Conference on Image and Video Retrieval.
- Computers & Mathematics with Applications.
- Calibrating least squares semidefinite programming with equality and inequality constraints. SIAM Journal on Matrix Analysis and Applications.
- Simulating cointegrated time series. In: Winter Simulation Conference.
- A family of simple non-parametric kernel learning algorithms. The Journal of Machine Learning Research.
- Block relaxation and majorization methods for the nearest correlation matrix with factor structure. Computational Optimization and Applications.
- Solving large-scale least squares semidefinite programming by alternating direction methods. SIAM Journal on Matrix Analysis and Applications.
- Dual approaches to finite element model updating. Journal of Computational and Applied Mathematics.
- A projected semismooth Newton method for problems of calibrating least squares covariance matrix. Operations Research Letters.
- A sequential semismooth Newton method for the nearest low-rank correlation matrix problem. SIAM Journal on Optimization.
- Nonstationary modeling for multivariate spatial processes. Journal of Multivariate Analysis.
- Approximation of rank function and its application to the nearest low-rank correlation matrix. Journal of Global Optimization.
We consider the problem of finding the smallest adjustment to a given symmetric $n \times n$ matrix, as measured by the Euclidean or Frobenius norm, so that it satisfies some given linear equalities and inequalities, and in addition is positive semidefinite. This least-squares covariance adjustment problem is a convex optimization problem, and can be efficiently solved using standard methods when the number of variables (i.e., entries in the matrix) is modest, say, under $1000$. Since the number of variables is $n(n+1)/2$, this corresponds to a limit around $n=45$. Malick [{\it SIAM J. Matrix Anal.\ Appl.,} 26 (2005), pp. 272--284] studies a closely related problem and calls it the semidefinite least-squares problem. In this paper we formulate a dual problem that has no matrix inequality or matrix variables, and a number of (scalar) variables equal to the number of equality and inequality constraints in the original least-squares covariance adjustment problem. This dual problem allows us to solve far larger least-squares covariance adjustment problems than would be possible using standard methods. Assuming a modest number of constraints, problems with $n=1000$ are readily solved by the dual method. The dual method coincides with the dual method proposed by Malick when there are no inequality constraints and can be obtained as an extension of his dual method when there are inequality constraints. Using the dual problem, we show that in many cases the optimal solution is a low rank update of the original matrix. When the original matrix has structure, such as sparsity, this observation allows us to solve very large least-squares covariance adjustment problems.
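To make the problem class concrete, here is a minimal sketch of a well-known special case: finding the nearest correlation matrix, i.e., the least-squares adjustment of a symmetric matrix subject to the equality constraints $\mathrm{diag}(X) = 1$ and the positive-semidefiniteness constraint. This sketch uses Dykstra-style alternating projections (in the spirit of Higham's method), not the dual method proposed in the paper; the function name and iteration count are illustrative choices, not from the source.

```python
import numpy as np

def nearest_correlation(A, n_iter=200):
    """Dykstra's alternating projections for the nearest correlation
    matrix: a special case of least-squares covariance adjustment with
    equality constraints diag(X) = 1 and the conic constraint X >= 0
    (positive semidefinite). A simple illustrative sketch, not the
    dual method of the paper."""
    Y = A.copy()
    dS = np.zeros_like(A)  # Dykstra correction term
    for _ in range(n_iter):
        R = Y - dS
        # Spectral projection onto the PSD cone: clip negative eigenvalues.
        w, V = np.linalg.eigh(R)
        X = (V * np.maximum(w, 0.0)) @ V.T
        dS = X - R
        # Projection onto the affine set {Y : diag(Y) = 1}.
        Y = X.copy()
        np.fill_diagonal(Y, 1.0)
    return Y

# An indefinite symmetric matrix with unit diagonal (not a valid
# correlation matrix), so the adjustment is nontrivial.
A = np.array([[1.0, 0.9, 0.7],
              [0.9, 1.0, 0.3],
              [0.7, 0.3, 1.0]])
X = nearest_correlation(A)
```

As the abstract notes, a general solver treats $n(n+1)/2$ scalar variables; this dense eigendecomposition approach costs $O(n^3)$ per iteration, which is exactly the regime where the paper's dual formulation, with one variable per constraint, becomes attractive for large $n$.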