We consider the estimation of a Gaussian random vector x observed through a linear transformation H and corrupted by additive Gaussian noise with a known covariance matrix, where the covariance matrix of x is known to lie in a given uncertainty region described by bounds on its eigenvalues and on its individual elements. Recently, two minimax criteria, called difference regret (DR) and ratio regret (RR), were proposed, and their closed-form solutions were derived under the assumptions that the eigenvalues of the covariance matrix of x lie in a given uncertainty region and that the matrices H^T C_w^{-1} H and C_x are jointly diagonalizable, where C_w and C_x denote the covariance matrices of the additive noise and of x, respectively. In this work we present a new criterion for the minimax estimation problem, which we call the generalized difference regret (GDR), and derive a new minimax estimator based on it. The uncertainty region is defined not only by upper and lower bounds on the eigenvalues of the covariance matrix of x, but also by upper and lower bounds on its individual elements. Furthermore, the new estimator does not require the joint diagonalizability assumption, and it can be computed efficiently using semidefinite programming. We also show that when the joint diagonalizability assumption holds and only eigenvalue uncertainties are present, the new estimator coincides with the DR estimator. Experimental results show improved mean squared error (MSE) compared to the MMSE, DR, and RR estimators.
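As context for the baseline against which the GDR estimator is compared, the following is a minimal sketch of the classical (nominal) MMSE estimator for the linear Gaussian model y = Hx + w with known covariances C_x and C_w; the specific matrices below are illustrative assumptions, not data from the paper, and the GDR estimator itself (an SDP) is not reproduced here.

```python
import numpy as np

def mmse_estimate(y, H, Cx, Cw):
    """Nominal linear MMSE estimate of zero-mean x from y = H x + w.

    Uses x_hat = Cx H^T (H Cx H^T + Cw)^{-1} y, equivalent (by the
    matrix inversion lemma) to (H^T Cw^{-1} H + Cx^{-1})^{-1} H^T Cw^{-1} y,
    the form in which H^T Cw^{-1} H appears in the abstract.
    """
    S = H @ Cx @ H.T + Cw          # innovation covariance
    return Cx @ H.T @ np.linalg.solve(S, y)

# Illustrative example with assumed matrices.
rng = np.random.default_rng(0)
H = np.array([[1.0, 0.5],
              [0.0, 1.0]])
Cx = np.array([[2.0, 0.3],
               [0.3, 1.0]])        # nominal signal covariance
Cw = 0.1 * np.eye(2)               # known noise covariance
x = rng.multivariate_normal(np.zeros(2), Cx)
y = H @ x + rng.multivariate_normal(np.zeros(2), Cw)
x_hat = mmse_estimate(y, H, Cx, Cw)
```

When C_x is only known to lie in an uncertainty region, this nominal estimator can be far from minimax-optimal, which is the gap the DR, RR, and GDR criteria address.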