Limiting spectral distribution for a class of random matrices
Journal of Multivariate Analysis
Strong convergence of the empirical distribution of eigenvalues of large dimensional random matrices
Journal of Multivariate Analysis
Best approximation of the identity mapping: The case of variable finite memory
Journal of Approximation Theory
Optimal multilinear estimation of a random vector under constraints of causality and limited memory
Computational Statistics & Data Analysis
Sample covariance shrinkage for high dimensional dependent data
Journal of Multivariate Analysis
Structural equation modeling with near singular covariance matrices
Computational Statistics & Data Analysis
Unified eigen analysis on multivariate Gaussian based estimation of distribution algorithms
Information Sciences: an International Journal
Properties of the singular, inverse and generalized inverse partitioned Wishart distributions
Journal of Multivariate Analysis
Shrinkage estimation in the frequency domain of multivariate time series
Journal of Multivariate Analysis
An improved shrinkage estimator to infer regulatory networks with Gaussian graphical models
Proceedings of the 2009 ACM symposium on Applied Computing
EvoBIO '09 Proceedings of the 7th European Conference on Evolutionary Computation, Machine Learning and Data Mining in Bioinformatics
Review of user parameter-free robust adaptive beamforming algorithms
Digital Signal Processing
Sparse Gaussian graphical models with unknown block structure
ICML '09 Proceedings of the 26th Annual International Conference on Machine Learning
A robust hidden Markov Gauss mixture vector quantizer for a noisy source
IEEE Transactions on Image Processing
Testing the equality of several covariance matrices with fewer observations than the dimension
Journal of Multivariate Analysis
Covariance estimation in decomposable Gaussian graphical models
IEEE Transactions on Signal Processing
Autoregressive frequency detection using Regularized Least Squares
Journal of Multivariate Analysis
Hybrid approaches and dimensionality reduction for portfolio selection with cardinality constraints
IEEE Computational Intelligence Magazine
High Dimensional Inverse Covariance Matrix Estimation via Linear Programming
The Journal of Machine Learning Research
Shrinkage algorithms for MMSE covariance estimation
IEEE Transactions on Signal Processing
Outlier detection and robust covariance estimation using mathematical programming
Advances in Data Analysis and Classification
Shrinkage-based regularization tests for high-dimensional data with application to gene set analysis
Computational Statistics & Data Analysis
A method for outdoor skateboarding video games
Proceedings of the 7th International Conference on Advances in Computer Entertainment Technology
Regularized parameter estimation in high-dimensional Gaussian mixture models
Neural Computation
A probabilistic framework to infer brain functional connectivity from anatomical connections
IPMI'11 Proceedings of the 22nd international conference on Information processing in medical imaging
EEGLAB, SIFT, NFT, BCILAB, and ERICA: new tools for advanced EEG processing
Computational Intelligence and Neuroscience - Special issue on academic software applications for electromagnetic brain mapping using MEG and EEG
Theoretical Analysis of Bayesian Matrix Factorization
The Journal of Machine Learning Research
Portfolio Selection Using Tikhonov Filtering to Estimate the Covariance Matrix
SIAM Journal on Financial Mathematics
Machine-Learning based co-adaptive calibration: a perspective to fight BCI illiteracy
HAIS'10 Proceedings of the 5th international conference on Hybrid Artificial Intelligence Systems - Volume Part I
On the estimation of dynamic conditional correlation models
Computational Statistics & Data Analysis
Machine-learning-based coadaptive calibration for brain-computer interfaces
Neural Computation
Weak conditions for shrinking multivariate nonparametric density estimators
Journal of Multivariate Analysis
Regularized continuous estimation of distribution algorithms
Applied Soft Computing
An iterative stochastic ensemble method for parameter estimation of subsurface flow models
Journal of Computational Physics
Hyperspectral anomaly detection: comparative evaluation in scenes with diverse complexity
Journal of Electrical and Computer Engineering - Special issue on Algorithms for Multispectral and Hyperspectral Image Analysis
User-centered design in brain-computer interfaces: a case study
Artificial Intelligence in Medicine
Covariance Matrix Estimation with Multi-Regularization Parameters based on MDL Principle
Neural Processing Letters
Model-based clustering of high-dimensional data: A review
Computational Statistics & Data Analysis
Robust common spatial filters with a maxmin approach
Neural Computation
Many applied problems require a covariance matrix estimator that is not only invertible, but also well-conditioned (that is, inverting it does not amplify estimation error). For large-dimensional covariance matrices, the usual estimator, the sample covariance matrix, is typically not well-conditioned and may not even be invertible. This paper introduces an estimator that is both well-conditioned and asymptotically more accurate than the sample covariance matrix. The estimator is distribution-free and has a simple explicit formula that is easy to compute and interpret: it is the asymptotically optimal convex linear combination of the sample covariance matrix with the identity matrix. Optimality is defined with respect to a quadratic loss function, asymptotically as the number of observations and the number of variables go to infinity together. Extensive Monte Carlo simulations confirm that the asymptotic results tend to hold well in finite samples.
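The convex combination described above can be sketched in NumPy using the standard plug-in formulas for the shrinkage weight: shrink the sample covariance toward a scaled identity by an amount governed by the ratio of estimation error to eigenvalue dispersion. This is a minimal illustration under the abstract's setup, not the paper's exact code; the function name and the MLE scaling of the sample covariance are my own choices.

```python
import numpy as np

def shrinkage_covariance(X):
    """Shrink the sample covariance of X (n samples x p variables) toward
    a scaled identity via the asymptotically optimal convex combination.
    Returns (shrunk covariance, shrinkage weight in [0, 1])."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)                     # center the data
    S = Xc.T @ Xc / n                           # sample covariance (MLE scaling)
    mu = np.trace(S) / p                        # average eigenvalue of S
    d2 = np.sum((S - mu * np.eye(p)) ** 2) / p  # dispersion of S around mu*I
    # average squared distance of the rank-one terms x_k x_k' from S,
    # a plug-in estimate of the estimation error in S
    b2_bar = sum(np.sum((np.outer(x, x) - S) ** 2) for x in Xc) / (n ** 2 * p)
    b2 = min(b2_bar, d2)                        # error cannot exceed dispersion
    a2 = d2 - b2
    # convex combination: more error -> shrink harder toward mu*I
    shrunk = (b2 / d2) * mu * np.eye(p) + (a2 / d2) * S
    return shrunk, b2 / d2
```

Even when p exceeds n, so that the sample covariance itself is singular, the shrunk estimator remains positive definite (and hence invertible) as long as the shrinkage weight is positive, and it preserves the trace of the sample covariance.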