On the Performance of Vector Quantizers Empirically Designed from Dependent Sources. Proceedings of the Data Compression Conference (DCC '98).
Empirical quantizer design in the presence of source noise or channel noise. IEEE Transactions on Information Theory.
On the amount of statistical side information required for lossy data compression. IEEE Transactions on Information Theory.
The minimax distortion redundancy in empirical quantizer design. IEEE Transactions on Information Theory.
On the training distortion of vector quantizers. IEEE Transactions on Information Theory.
Earlier results show that the minimax expected (test) distortion redundancy of empirical vector quantizers with three or more levels, designed from n independent and identically distributed data points, is at least $\Omega(1/\sqrt{n})$ for the class of distributions supported on a bounded set. This paper gives a much simpler construction and proof of this bound, with substantially better constants. Similar bounds hold for the training distortion of the empirically optimal vector quantizer with three or more levels. These rates, however, fail for one-level quantizers. Here the two-level case is settled: it is shown to already exhibit the behavior of the general case. Since the minimax bounds are proved via a construction involving discrete distributions, one might suspect that for the class of distributions with uniformly bounded continuous densities the expected distortion redundancy decreases as $o(1/\sqrt{n})$ uniformly. It is shown that this is not the case: the lower bound on the expected test distortion remains valid for these subclasses.
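To make the quantities concrete, one standard formalization (the notation here is illustrative, not taken from the paper) is the following. For a source distribution $\mu$ on a bounded subset of $\mathbb{R}^d$ and a $k$-level quantizer $q$, write the distortion and the redundancy of a quantizer $q_n$ designed from $n$ i.i.d. samples of $\mu$ as

$$
D(\mu, q) = \mathbb{E}_{X \sim \mu}\, \lVert X - q(X) \rVert^2, \qquad
J_n(\mu) = \mathbb{E}\, D(\mu, q_n) - \inf_{q \in \mathcal{Q}_k} D(\mu, q),
$$

where $\mathcal{Q}_k$ is the set of $k$-level quantizers. The minimax lower bound discussed above then reads, for $k \ge 3$ (and, by this paper, already for $k = 2$),

$$
\inf_{q_n} \sup_{\mu \in \mathcal{D}} J_n(\mu) \;\ge\; \frac{c}{\sqrt{n}},
$$

where $\mathcal{D}$ is the class of distributions on a bounded set and $c > 0$ is a constant.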
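The objects in the abstract are easy to simulate. Below is a minimal Python sketch (ours, not the paper's construction) of empirical quantizer design for a one-dimensional source under squared-error distortion: a k-level quantizer is fitted to n training samples with Lloyd's algorithm (a stand-in for the empirically optimal quantizer, which Lloyd only approximates locally), and its training and test distortions are compared with the optimal distortion, which is known in closed form for the uniform source.

import numpy as np

def lloyd(train, k, iters=50, seed=0):
    """Empirically design a k-level quantizer from training samples via
    Lloyd's algorithm (squared-error distortion, one-dimensional source).
    Note: this finds a locally optimal codebook, a practical proxy for the
    empirically optimal quantizer discussed in the abstract."""
    rng = np.random.default_rng(seed)
    # Initialize codepoints at distinct random training samples.
    codes = rng.choice(train, size=k, replace=False)
    for _ in range(iters):
        # Nearest-codepoint assignment.
        idx = np.argmin((train[:, None] - codes[None, :]) ** 2, axis=1)
        # Centroid update; keep the old codepoint if its cell is empty.
        codes = np.array([train[idx == j].mean() if np.any(idx == j) else codes[j]
                          for j in range(k)])
    return np.sort(codes)

def distortion(x, codes):
    """Mean squared-error distortion of quantizing x with the given codepoints."""
    return np.min((x[:, None] - codes[None, :]) ** 2, axis=1).mean()

# Example: uniform source on [0, 1], k = 3 levels, n = 500 training points.
rng = np.random.default_rng(1)
train = rng.uniform(0.0, 1.0, size=500)
test = rng.uniform(0.0, 1.0, size=100_000)   # large sample as a proxy for true distortion

codes = lloyd(train, k=3)
# For Uniform[0, 1] the optimal k-level quantizer has distortion 1/(12 k^2),
# so for k = 3 the optimum is 1/108.
d_opt = 1.0 / 108.0
print("training distortion:", distortion(train, codes))
print("test distortion:    ", distortion(test, codes))
print("test redundancy:    ", distortion(test, codes) - d_opt)

Averaging the printed test redundancy over many independent training sets at several values of n would trace out its decay with n; the minimax lower bound says that no design procedure can push this decay below order $1/\sqrt{n}$ uniformly over the whole distribution class, even when the class is restricted to uniformly bounded continuous densities.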