Rates of convergence in the source coding theorem, in empirical quantizer design, and in universal lossy source coding

  • Authors:
  • T. Linder; G. Lugosi; K. Zeger

  • Affiliations:
  • Dept. of Telecommun., Tech. Univ. Budapest

  • Venue:
  • IEEE Transactions on Information Theory
  • Year:
  • 1994

Abstract

Rate of convergence results are established for vector quantization. Convergence rates are given for an increasing vector dimension and/or an increasing training set size. In particular, the following results are shown for memoryless real-valued sources with bounded support at transmission rate R. (1) If a vector quantizer of fixed dimension k is designed to minimize the empirical mean-square error (MSE) with respect to m training vectors, then its MSE for the true source converges in expectation and almost surely to the minimum possible MSE as O(√(log m/m)). (2) The MSE of an optimal k-dimensional vector quantizer for the true source converges, as the dimension grows, to the distortion-rate function D(R) as O(√(log k/k)). (3) There exists a fixed-rate universal lossy source coding scheme whose per-letter MSE on n real-valued source samples converges in expectation and almost surely to the distortion-rate function D(R) as O(√(log log n/log n)). (4) Consider a training set of n real-valued source samples blocked into vectors of dimension k, and a k-dimensional vector quantizer designed to minimize the empirical MSE with respect to the m=⌊n/k⌋ training vectors. Then the per-letter MSE of this quantizer for the true source converges in expectation and almost surely to the distortion-rate function D(R) as O(√(log log n/log n)), if one chooses k=⌊(1/R)(1-ε)log n⌋ for any ε∈(0,1).
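
The sketch below is a minimal illustration of the empirical design setting in result (1): a k-dimensional quantizer with 2^(kR) codevectors is fitted to m training vectors by (locally) minimizing the empirical MSE, and its distortion is then measured on fresh samples standing in for the true source. It is not taken from the paper; the uniform source, the choices k=2, R=1, m=4000, and the Lloyd-style iteration are illustrative assumptions, and Lloyd iteration only approximates the empirically optimal quantizer that the theorem is actually about.

```python
# Hypothetical sketch (not the paper's construction): empirical quantizer design
# by minimizing empirical MSE on m training vectors, then evaluating on the source.
import numpy as np

def design_quantizer(train, rate, n_iter=50, seed=0):
    """Lloyd-style design: train has shape (m, k); codebook size N = 2^(k*rate)."""
    m, k = train.shape
    N = 2 ** int(k * rate)                    # number of codevectors at rate R
    rng = np.random.default_rng(seed)
    codebook = train[rng.choice(m, N, replace=False)]  # init from training vectors
    for _ in range(n_iter):
        # nearest-neighbor partition of the training vectors
        d = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # centroid update; keep the old codevector for empty cells
        for j in range(N):
            cell = train[labels == j]
            if len(cell) > 0:
                codebook[j] = cell.mean(axis=0)
    return codebook

def per_letter_mse(data, codebook):
    """Per-letter MSE of nearest-neighbor quantization (data shape (n, k))."""
    d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.min(axis=1).mean() / data.shape[1]

# Illustrative setup: uniform source on [0, 1], rate R = 1 bit/sample, dimension k = 2.
rng = np.random.default_rng(1)
k, R, m = 2, 1, 4000
train = rng.uniform(0, 1, size=(m, k))       # m training vectors
test = rng.uniform(0, 1, size=(100000, k))   # fresh samples standing in for the true source
cb = design_quantizer(train, R)
print("empirical MSE on training set:", per_letter_mse(train, cb))
print("MSE on fresh source samples  :", per_letter_mse(test, cb))
```

Increasing m shrinks the gap between the two printed values; the theorem quantifies this for the empirically optimal quantizer, giving the O(√(log m/m)) rate quoted above.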