Vector quantization with complexity costs

  • Authors:
  • J. Buhmann; H. Kühnel

  • Affiliations:
  • Inst. für Inf. II, Rheinische Friedrich-Wilhelms-Univ., Bonn

  • Venue:
  • IEEE Transactions on Information Theory
  • Year:
  • 1993

Abstract

Vector quantization is a data compression method in which a set of data points is encoded by a reduced set of reference vectors: the codebook. A vector quantization strategy is discussed that jointly optimizes distortion errors and codebook complexity, thereby determining the size of the codebook. A maximum entropy estimation of the cost function yields an optimal number of reference vectors, their positions, and their assignment probabilities. The dependence of the codebook density on the data density for different complexity functions is investigated in the asymptotic limit of a large number of quantization levels. How different complexity measures influence the efficiency of vector quantizers is studied for the task of image compression: the wavelet coefficients of gray-level images are quantized, and the reconstruction error is measured. The approach establishes a unifying framework for different quantization methods, such as K-means clustering and its fuzzy version, entropy-constrained vector quantization, topological feature maps, and competitive neural networks.
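
To make the joint optimization concrete, the following Python sketch illustrates one way the ideas combine. It is not the authors' implementation: it assumes squared-error distortion and a -log p complexity term (the form used in entropy-constrained vector quantization, one of the special cases named above), with maximum-entropy (Gibbs) assignment probabilities at a fixed inverse temperature. The names complexity_vq, lam, and beta are illustrative, not from the paper.

    import numpy as np

    def complexity_vq(X, K, lam=0.5, beta=5.0, n_iter=100, seed=0):
        # Soft vector quantization with a complexity cost.
        # Each codeword k carries an effective cost
        #     d(x, y_k) + lam * (-log p_k),
        # and assignments follow the maximum-entropy (Gibbs)
        # distribution at inverse temperature beta, so distortion
        # and codebook complexity are optimized jointly.
        rng = np.random.default_rng(seed)
        Y = X[rng.choice(len(X), size=K, replace=False)].copy()  # initial codebook
        p = np.full(K, 1.0 / K)  # codeword usage probabilities

        for _ in range(n_iter):
            # squared-error distortion of every point to every codeword
            d = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
            # add the complexity term: rarely used codewords get expensive
            cost = d - lam * np.log(p + 1e-12)
            # Gibbs assignment probabilities (numerically stabilized)
            q = np.exp(-beta * (cost - cost.min(axis=1, keepdims=True)))
            q /= q.sum(axis=1, keepdims=True)
            # re-estimate codeword positions and usage probabilities
            w = q.sum(axis=0)
            Y = (q.T @ X) / (w[:, None] + 1e-12)
            p = w / w.sum()
        return Y, p

    # usage: start with K = 8 candidate codewords on 2-D data
    X = np.random.default_rng(1).normal(size=(500, 2))
    Y, p = complexity_vq(X, K=8)
    print(np.round(p, 3))

In this sketch, raising lam makes rarely used codewords more expensive, driving their usage probabilities toward zero; pruning those codewords is what lets the complexity penalty, rather than a fixed K, determine the effective codebook size.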