Computing a Nearest Correlation Matrix with Factor Structure

  • Authors:
  • Rüdiger Borsdorf, Nicholas J. Higham, Marcos Raydan

  • Affiliations:
  • borsdorf@maths.man.ac.uk and higham@maths.man.ac.uk; -; mraydan@usb.ve

  • Venue:
  • SIAM Journal on Matrix Analysis and Applications
  • Year:
  • 2010

Abstract

An $n\times n$ correlation matrix has $k$ factor structure if its off-diagonal agrees with that of a rank $k$ matrix. Such correlation matrices arise, for example, in factor models of collateralized debt obligations (CDOs) and multivariate time series. We analyze the properties of these matrices and, in particular, obtain an explicit formula for the rank in the one factor case. Our main focus is on the nearness problem of finding the nearest $k$ factor correlation matrix $C(X) = \mathrm{diag}(I-XX^T) + XX^T$ to a given symmetric matrix, subject to natural nonlinear constraints on the elements of the $n\times k$ matrix $X$, where distance is measured in the Frobenius norm. For a special one parameter case we obtain an explicit solution. For the general $k$ factor case we obtain the gradient and Hessian of the objective function and derive an instructive result on the positive definiteness of the Hessian when $k=1$. We investigate several numerical methods for solving the nearness problem: the alternating directions method; a principal factors method used by Anderson, Sidenius, and Basu in the CDO application, which we show is equivalent to the alternating projections method and lacks convergence results; the spectral projected gradient method of Birgin, Martínez, and Raydan; and Newton and sequential quadratic programming methods. The methods differ in whether or not they can take account of the nonlinear constraints and in their convergence properties. Our numerical experiments show that the performance of the methods depends strongly on the problem, but that the spectral projected gradient method is the clear winner.
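
The nearness problem above minimizes $f(X) = \|A - C(X)\|_F^2$ over $n\times k$ matrices $X$ whose rows have 2-norm at most 1, which keeps $\mathrm{diag}(I-XX^T)$ nonnegative and hence $C(X)$ a genuine correlation matrix. As a rough illustration of this setup only (not the authors' implementation, with function and parameter names of our own choosing), the NumPy sketch below evaluates $C(X)$, the objective and its gradient $-4RX$ where $R$ is the off-diagonal residual $A - XX^T$, projects onto the row-norm constraint set, and runs a plain monotone projected gradient iteration in place of the spectral projected gradient method of Birgin, Martínez, and Raydan.

```python
# Minimal sketch of the k-factor nearness problem; illustrative only,
# not the algorithm or code of Borsdorf, Higham, and Raydan.
import numpy as np

def C(X):
    """Correlation matrix with k factor structure: unit diagonal,
    off-diagonal entries taken from X @ X.T."""
    G = X @ X.T
    return G - np.diag(np.diag(G)) + np.eye(X.shape[0])

def objective_and_gradient(A, X):
    """f(X) = ||A - C(X)||_F^2 and its gradient -4*R*X,
    where R is the off-diagonal part of A - X X^T."""
    R = A - X @ X.T
    np.fill_diagonal(R, 0.0)   # diagonal of C(X) is fixed at 1
    f = np.linalg.norm(A - C(X), "fro") ** 2
    g = -4.0 * R @ X
    return f, g

def project_rows(X):
    """Project onto {X : ||row_i(X)||_2 <= 1} by rescaling long rows."""
    norms = np.linalg.norm(X, axis=1)
    return X / np.maximum(norms, 1.0)[:, None]

def projected_gradient(A, k, maxit=200, tol=1e-8, seed=0):
    """Plain projected gradient with step halving (illustrative stand-in
    for the spectral projected gradient method)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    X = project_rows(rng.standard_normal((n, k)) / np.sqrt(k))
    f, g = objective_and_gradient(A, X)
    for _ in range(maxit):
        step = 1.0
        while True:                      # backtrack until the objective decreases
            X_new = project_rows(X - step * g)
            f_new, g_new = objective_and_gradient(A, X_new)
            if f_new <= f or step < 1e-12:
                break
            step *= 0.5
        converged = abs(f - f_new) <= tol * max(1.0, f)
        X, f, g = X_new, f_new, g_new
        if converged:
            break
    return X

if __name__ == "__main__":
    # Nearest 1-factor correlation matrix to a symmetric unit-diagonal A.
    rng = np.random.default_rng(0)
    B = rng.standard_normal((6, 6))
    A = (B + B.T) / 6.0
    np.fill_diagonal(A, 1.0)
    X = projected_gradient(A, k=1)
    print("distance:", np.linalg.norm(A - C(X), "fro"))
```

This sketch uses a simple backtracking step for robustness; the spectral projected gradient method discussed in the paper instead combines Barzilai-Borwein step lengths with a nonmonotone line search, which is what makes it competitive in practice.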