Matrix analysis
Algorithm 813: SPG—Software for Convex-Constrained Optimization. ACM Transactions on Mathematical Software (TOMS).
Optimization by Vector Space Methods.
On the Convergence of Pattern Search Algorithms. SIAM Journal on Optimization.
Nonmonotone Spectral Projected Gradient Methods on Convex Sets. SIAM Journal on Optimization.
A Quadratically Convergent Newton Method for Computing the Nearest Correlation Matrix. SIAM Journal on Matrix Analysis and Applications.
Functions of Matrices: Theory and Computation.
Correlation stress testing for value-at-risk: an unconstrained convex optimization approach. Computational Optimization and Applications.
Block relaxation and majorization methods for the nearest correlation matrix with factor structure. Computational Optimization and Applications.
Covariance structure regularization via entropy loss function. Computational Statistics & Data Analysis.
An $n\times n$ correlation matrix has $k$ factor structure if its off-diagonal agrees with that of a rank $k$ matrix. Such correlation matrices arise, for example, in factor models of collateralized debt obligations (CDOs) and multivariate time series. We analyze the properties of these matrices and, in particular, obtain an explicit formula for the rank in the one factor case. Our main focus is on the nearness problem of finding the nearest $k$ factor correlation matrix $C(X) = \operatorname{diag}(I-XX^T) + XX^T$ to a given symmetric matrix, subject to natural nonlinear constraints on the elements of the $n\times k$ matrix $X$, where distance is measured in the Frobenius norm. For a special one parameter case we obtain an explicit solution. For the general $k$ factor case we obtain the gradient and Hessian of the objective function and derive an instructive result on the positive definiteness of the Hessian when $k=1$. We investigate several numerical methods for solving the nearness problem: the alternating directions method; a principal factors method used by Anderson, Sidenius, and Basu in the CDO application, which we show is equivalent to the alternating projections method and lacks convergence results; the spectral projected gradient method of Birgin, Martínez, and Raydan; and Newton and sequential quadratic programming methods. The methods differ in whether or not they can take account of the nonlinear constraints and in their convergence properties. Our numerical experiments show that the performance of the methods depends strongly on the problem, but that the spectral projected gradient method is the clear winner.
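To make the nearness problem concrete, the following NumPy sketch assembles $C(X) = \operatorname{diag}(I-XX^T) + XX^T$, the squared Frobenius-norm objective, its gradient, and a projection onto the natural constraint that each row of $X$ has 2-norm at most 1, and runs a plain projected gradient iteration. This is an illustrative sketch under stated assumptions: the function names, the fixed step size, and the simple projected gradient loop are choices made here for demonstration, not the authors' implementation and not the nonmonotone spectral projected gradient method of Birgin, Martínez, and Raydan.

    # Illustrative sketch (not the paper's code) of the k-factor nearness problem:
    #   minimize f(X) = ||A - C(X)||_F^2,  C(X) = diag(I - X X^T) + X X^T,
    # subject to each row of X having 2-norm at most 1.
    import numpy as np

    def C(X):
        """k-factor correlation matrix: off-diagonal of X X^T, unit diagonal."""
        M = X @ X.T
        np.fill_diagonal(M, 1.0)
        return M

    def objective(A, X):
        """Squared Frobenius distance ||A - C(X)||_F^2."""
        return np.linalg.norm(A - C(X), "fro") ** 2

    def gradient(A, X):
        """Gradient of the objective with respect to X.

        Only off-diagonal entries of A - X X^T depend on X, giving
        grad f(X) = -4 * offdiag(A - X X^T) @ X.
        """
        R = A - X @ X.T
        np.fill_diagonal(R, 0.0)
        return -4.0 * R @ X

    def project(X):
        """Scale any row of X with norm > 1 back onto the unit ball."""
        norms = np.linalg.norm(X, axis=1, keepdims=True)
        return X / np.maximum(norms, 1.0)

    def projected_gradient(A, k, steps=2000, alpha=5e-3, seed=None):
        """Plain projected gradient descent with a fixed step size.

        A simple stand-in for the (spectral-step, nonmonotone) SPG method
        discussed in the abstract; intended only to show the structure
        of the constrained problem.
        """
        rng = np.random.default_rng(seed)
        n = A.shape[0]
        X = project(rng.standard_normal((n, k)) / np.sqrt(k))
        for _ in range(steps):
            X = project(X - alpha * gradient(A, X))
        return X

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n, k = 8, 2
        A = rng.standard_normal((n, n))
        A = (A + A.T) / 2          # symmetric "target" matrix
        np.fill_diagonal(A, 1.0)
        X = projected_gradient(A, k, seed=1)
        print("Frobenius distance:", objective(A, X) ** 0.5)

The fixed step size and random starting point are the crudest possible choices; they merely illustrate why methods that exploit the constraint structure and use better step-length rules (such as SPG, Newton, or SQP) are the subject of the paper's comparison.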