Convergence rate on a nonparametric estimator for the conditional mean
ISIT'09 Proceedings of the 2009 IEEE international conference on Symposium on Information Theory - Volume 1
In designing a vector quantizer using a training sequence (TS), the training algorithm seeks an empirically optimal quantizer that minimizes the selected distortion criterion over the sequence. To evaluate the performance of the trained quantizer, we can use the empirically minimized distortion obtained when designing the quantizer. Several upper bounds on the empirically minimized distortion are proposed, along with numerical results. The bounds hold pointwise, i.e., for each distribution with finite second moment in a class. From the pointwise bounds it is possible to derive a worst-case bound, which is better than the current bounds for practical values of the training ratio β, the ratio of the TS size to the codebook size. It is shown that the empirically minimized distortion underestimates the true minimum distortion by more than a factor of (1 - 1/m), where m is the TS size. Furthermore, through an asymptotic analysis in the codebook size, a multiplication factor [1 - (1 - e^{-β})/β] ≈ (1 - 1/β) for an asymptotic bound is shown. Several asymptotic bounds in terms of the vector dimension and the type of source are also introduced.
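The underestimation phenomenon described above can be illustrated with a small simulation. The sketch below is not the paper's method; it simply trains a squared-error quantizer on a short TS via plain Lloyd (k-means) iterations and compares the empirically minimized distortion on the TS against the distortion on a large held-out set, used as a stand-in for the true distortion. All names, the codebook size, and the training ratio (here β = 10) are illustrative choices, not taken from the paper.

```python
import numpy as np

def lloyd_kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd iterations: a (locally) empirically optimal quantizer
    for squared-error distortion on the training sequence X."""
    rng = np.random.default_rng(seed)
    codebook = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # nearest-codeword assignment
        d = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        # centroid update (skip empty cells)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                codebook[j] = pts.mean(axis=0)
    return codebook

def distortion(X, codebook):
    """Mean squared error of quantizing X with the given codebook."""
    d = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.min(axis=1).mean()

rng = np.random.default_rng(1)
k, dim = 8, 2
train = rng.standard_normal((k * 10, dim))   # short TS: training ratio beta = 10
test = rng.standard_normal((200_000, dim))   # large held-out set ~ true distortion

cb = lloyd_kmeans(train, k)
emp = distortion(train, cb)    # empirically minimized distortion
true_d = distortion(test, cb)  # proxy for the true distortion of the trained quantizer
print(f"empirical: {emp:.4f}  held-out: {true_d:.4f}")
```

Because the codebook is fit to the same TS on which the empirical distortion is measured, `emp` is optimistically biased and typically falls below `true_d`, which is the gap the paper's bounds quantify.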