We present a universal estimator of the divergence $D(P \| Q)$ for two arbitrary continuous distributions $P$ and $Q$ satisfying certain regularity conditions. The algorithm, which observes independent and identically distributed (i.i.d.) samples from both $P$ and $Q$, is based on estimating the Radon–Nikodym derivative $\frac{dP}{dQ}$ via a data-dependent partition of the observation space. Strong convergence of this estimator is proved under an empirically equivalent segmentation of the space. The basic estimator is further improved by adaptive partitioning schemes and by bias correction. The application of the algorithms to data with memory is also investigated. In simulations, we compare our estimators with the direct plug-in estimator and with estimators based on other partitioning approaches. Experimental results show that our methods achieve the best convergence performance in most of the tested cases.
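To make the idea concrete, the following is a minimal one-dimensional sketch of a data-dependent-partition divergence estimator: cells are chosen at empirical quantiles of the $Q$-samples so that each cell holds roughly the same number of them, and the divergence is then the plug-in divergence of the induced cell probabilities. The function name, the fixed cell count, and the omission of the paper's adaptive partitioning and bias correction are all simplifications for illustration, not the authors' exact algorithm.

```python
import numpy as np

def kl_partition_estimate(x, y, n_cells=10, eps=1e-12):
    """Illustrative 1-D data-dependent-partition estimate of D(P||Q).

    x : i.i.d. samples from P;  y : i.i.d. samples from Q.
    Cell edges are placed at empirical quantiles of y, so each cell
    contains about the same number of Q-samples (a data-dependent
    partition).  NOTE: a simplified sketch, without the adaptive
    refinement or bias correction described in the paper.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # Cell edges at empirical quantiles of Q; extend the outer cells
    # to cover the whole real line so every P-sample is counted.
    edges = np.quantile(y, np.linspace(0.0, 1.0, n_cells + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    # Empirical cell probabilities under P and Q.
    p = np.histogram(x, bins=edges)[0] / len(x)
    q = np.histogram(y, bins=edges)[0] / len(y)
    # Plug-in divergence over the partition (0 * log 0 := 0).
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / (q[mask] + eps))))
```

For two standard test distributions, e.g. $P = \mathcal{N}(0.5, 1)$ and $Q = \mathcal{N}(0, 1)$ (true divergence $0.5^2/2 = 0.125$ nats), the estimate is positive and approaches the quantized divergence as the sample size grows; refining the partition with the sample size is what the convergence analysis in the paper makes rigorous.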