Cramér-Rao bounds on estimation accuracy are established for estimation problems on arbitrary manifolds in which no set of intrinsic coordinates exists. The frequently encountered examples of estimating either an unknown subspace or a covariance matrix are examined in detail. The set of subspaces, called the Grassmann manifold, and the set of covariance (positive-definite Hermitian) matrices have no fixed coordinate system associated with them and do not possess a vector space structure, both of which are required for deriving classical Cramér-Rao bounds. Intrinsic versions of the Cramér-Rao bound on manifolds utilizing an arbitrary affine connection with arbitrary geodesics are derived for both biased and unbiased estimators. In the example of covariance matrix estimation, closed-form expressions for both the intrinsic and flat bounds are derived and compared with the root-mean-square error (RMSE) of the sample covariance matrix (SCM) estimator for varying sample support K. The accuracy bound on unbiased covariance matrix estimators is shown to be about (10/log 10)·n/K^{1/2} dB, where n is the matrix order. Remarkably, it is shown that from an intrinsic perspective, the SCM is a biased and inefficient estimator and that the bias term reveals the dependency of estimation accuracy on sample support observed in theory and practice. The RMSE of the standard method of estimating subspaces using the singular value decomposition (SVD) is compared with the intrinsic subspace Cramér-Rao bound derived in closed form by varying both the signal-to-noise ratio (SNR) of the unknown p-dimensional subspace and the sample support. In the simplest case, the Cramér-Rao bound on subspace estimation accuracy is shown to be about (p(n-p)/(2·SNR))^{1/2} rad for p-dimensional subspaces.
It is seen that the SVD-based method yields accuracies very close to the Cramér-Rao bound, establishing that the principal invariant subspace of a random sample provides an excellent estimator of an unknown subspace. The analysis approach developed is directly applicable to many other estimation problems on manifolds encountered in signal processing and elsewhere, such as estimating rotation matrices in computer vision and estimating subspace basis vectors in blind source separation.
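The covariance-accuracy figure quoted in the abstract can be checked numerically. The sketch below (not from the paper) assumes the true covariance is the identity, draws K complex Gaussian snapshots, and measures the SCM's error with the affine-invariant (natural) distance d(R, R̂) = ||log(R^{-1/2} R̂ R^{-1/2})||_F, converted to dB by the factor 10/log 10; the names `rmse_db` and `bound_db` are illustrative choices, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, trials = 4, 100, 500          # matrix order, sample support, Monte Carlo runs
sq_dists = []
for _ in range(trials):
    # K unit-variance complex Gaussian snapshots with true covariance R = I
    X = (rng.standard_normal((n, K)) + 1j * rng.standard_normal((n, K))) / np.sqrt(2)
    scm = X @ X.conj().T / K        # sample covariance matrix estimator
    # natural distance to R = I reduces to the RSS of log-eigenvalues of the SCM
    d = np.sqrt(np.sum(np.log(np.linalg.eigvalsh(scm)) ** 2))
    sq_dists.append(d ** 2)

rmse_db = (10 / np.log(10)) * np.sqrt(np.mean(sq_dists))  # observed RMSE, in dB
bound_db = (10 / np.log(10)) * n / np.sqrt(K)             # abstract's (10/log 10)*n/K^{1/2}
print(f"SCM RMSE ~ {rmse_db:.2f} dB, bound ~ {bound_db:.2f} dB")
```

With n = 4 and K = 100 the bound evaluates to about 1.74 dB, and the simulated SCM error lands in the same neighborhood, consistent with the abstract's observation that the SCM tracks (but does not attain) the bound.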
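The SVD-based subspace estimator mentioned above can likewise be sketched. The toy setup below is an assumption, not the paper's experiment: a fixed p-dimensional subspace, per-element signal power `snr` in unit-variance noise, the principal invariant subspace taken from the SVD of the data, and error measured as the root-sum-square of the principal angles. The bound line scales the abstract's simplest-case expression by 1/sqrt(K) for K snapshots, which is also an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, K, snr = 8, 2, 50, 100.0
U = np.eye(n)[:, :p]                 # true p-dimensional subspace (first p coordinates)

# K snapshots: subspace signal with per-element power `snr`, unit-variance complex noise
S = np.sqrt(snr / 2) * (rng.standard_normal((p, K)) + 1j * rng.standard_normal((p, K)))
N = (rng.standard_normal((n, K)) + 1j * rng.standard_normal((n, K))) / np.sqrt(2)
X = U @ S + N

# principal invariant subspace of the sample via the SVD
Uhat = np.linalg.svd(X, full_matrices=False)[0][:, :p]

# subspace error = root-sum-square of principal angles (cosines are the
# singular values of U^H * Uhat), in radians
cosines = np.clip(np.linalg.svd(U.conj().T @ Uhat, compute_uv=False), -1.0, 1.0)
err_rad = np.sqrt(np.sum(np.arccos(cosines) ** 2))

# abstract's (p(n-p)/(2*SNR))^{1/2}, scaled by 1/sqrt(K) for K snapshots (assumption)
bound_rad = np.sqrt(p * (n - p) / (2 * K * snr))
print(f"SVD subspace error ~ {err_rad:.3f} rad, bound ~ {bound_rad:.3f} rad")
```

At this SNR and sample support the single-trial SVD error comes out within a small factor of the bound, illustrating the abstract's point that the principal invariant subspace is a near-efficient estimator.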