Looking for lumps: boosting and bagging for density estimation
Computational Statistics & Data Analysis - Nonlinear methods and data mining
We consider multivariate density estimation from identically distributed observations. We study a density estimator that is a convex combination of functions in a dictionary, where the convex combination is chosen by minimizing the L2 empirical risk in a stagewise manner. We derive convergence rates of the estimator when the estimated density belongs to the L2 closure of the convex hull of a class of functions satisfying entropy conditions. The L2 closure of a convex hull is a large nonparametric class, but under suitable entropy conditions the convergence rates of the estimator do not depend on the dimension, so density estimation is feasible even in high-dimensional cases. The variance of the estimator does not increase as the number of components increases; instead, the bias-variance trade-off is controlled by the choice of the dictionary from which the components are chosen.