For Gaussian regression, we develop and analyze methods for combining estimators from various models. For squared-error loss, an unbiased estimator of the risk of a mixture of general estimators is developed. Special attention is given to the case where the component estimators are least-squares projections onto arbitrary linear subspaces, such as those spanned by subsets of explanatory variables in a given design. We relate the unbiased estimate of the risk of the mixture estimator to the estimates of the risks achieved by the components. This yields simple and accurate bounds on the risk and its estimate, in the form of sharp and exact oracle inequalities. That is, without advance knowledge of which model is best, the resulting performance is comparable to, or perhaps even superior to, what is achieved by the best of the individual models. Furthermore, when the unknown parameter has a sparse representation, our mixture estimator adapts to the underlying sparsity. Simulations show that these mixture estimators outperform a related model-selection estimator that picks the single model with the highest weight. The connection between our mixtures and Bayes procedures is also discussed.
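The aggregation scheme described in the abstract can be sketched in a few lines. The following is a minimal illustrative sketch, not the paper's exact procedure: it assumes the noise variance sigma2 is known, takes candidate models as column subsets of a fixed design, scores each least-squares projection with a Mallows-Cp-style unbiased risk estimate, and mixes the fitted values with exponential weights. The function name and the temperature parameter beta are assumptions introduced here for illustration.

```python
import numpy as np

def exp_weight_mixture(y, X, models, sigma2, beta=4.0):
    """Exponentially weighted mixture of least-squares projections (sketch).

    y: (n,) response; X: (n, p) design; models: list of column-index lists;
    sigma2: noise variance (assumed known here); beta: temperature, an
    illustrative tuning choice in this sketch.
    """
    n = len(y)
    fits, risks = [], []
    for cols in models:
        Xm = X[:, cols]
        # Least-squares projection of y onto the span of the selected columns.
        coef, *_ = np.linalg.lstsq(Xm, y, rcond=None)
        fit = Xm @ coef
        # Mallows-Cp-style unbiased risk estimate for a rank-d projection:
        # ||y - fit||^2 + (2d - n) * sigma2 is unbiased for E||fit - mu||^2.
        rhat = np.sum((y - fit) ** 2) + (2 * len(cols) - n) * sigma2
        fits.append(fit)
        risks.append(rhat)
    risks = np.array(risks)
    # Exponential weights on the risk estimates; shift by the minimum
    # for numerical stability (the shift cancels after normalization).
    w = np.exp(-(risks - risks.min()) / (beta * sigma2))
    w /= w.sum()
    # Mixture estimate: weighted average of the component fits.
    return np.tensordot(w, np.array(fits), axes=1), w
```

Replacing the weighted average with the single fit at np.argmax(w) gives the model-selection estimator that the simulations compare against; the mixture instead hedges across models rather than committing to one.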