An on-line universal lossy data compression algorithm via continuous codebook refinement. II. Optimality for phi-mixing source models

  • Authors:
  • Zhen Zhang;En-hui Yang

  • Affiliations:
  • Dept. of Electr. Eng. Syst., Univ. of Southern California, Los Angeles, CA

  • Venue:
  • IEEE Transactions on Information Theory
  • Year:
  • 1996

Abstract

For Part I, see ibid., vol. 42, no. 3, pp. 803-821 (1996). Two versions of the gold-washing data compression algorithm are considered: one with a codebook innovation interval and the other with finitely many codebook innovations. The version with codebook innovation interval k is a variant of the gold-washing algorithm in which the codebook is innovated once every k+1 source words during the encoding of the entire source. It is demonstrated that when this version is applied to encode a stationary, φ-mixing source, the expected distortion performance converges to the distortion-rate function of the source as the codebook length goes to infinity. Furthermore, if the source to be encoded is a Markov source or a finite-state source, the corresponding sample distortion performance converges almost surely to the distortion-rate function. The version with finitely many codebook innovations is a variant in which, after finitely many codebook innovations, the codebook is held fixed and reused to encode the forthcoming source sequence block by block. Similar results are shown for this version of the algorithm. In addition, the convergence speed of the algorithm is discussed.
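The scheduling idea behind the innovation-interval variant can be illustrated with a minimal sketch. This is not the gold-washing algorithm itself (its innovation rule, described in Part I, is more involved); the sketch only shows the structure assumed here: block-by-block nearest-codeword encoding in which the codebook is refreshed once every k+1 encoded blocks. The `innovate` rule below (swap a random codeword for the most recent source block) is a hypothetical placeholder.

```python
import random

def nearest_index(block, codebook):
    # Index of the codeword minimizing squared-error distortion to the block.
    return min(range(len(codebook)),
               key=lambda j: sum((b - x) ** 2 for b, x in zip(block, codebook[j])))

def innovate(codebook, recent_blocks):
    # Placeholder "innovation": overwrite a random codeword with the most
    # recently seen source block. The actual gold-washing rule differs.
    if recent_blocks:
        codebook[random.randrange(len(codebook))] = list(recent_blocks[-1])

def encode(source, block_len, codebook, k):
    # Encode block by block; innovate the codebook once every k+1 blocks,
    # mirroring the "innovated once every k+1 source words" schedule.
    indices, recent = [], []
    for i in range(0, len(source) - block_len + 1, block_len):
        block = source[i:i + block_len]
        indices.append(nearest_index(block, codebook))
        recent.append(block)
        if len(indices) % (k + 1) == 0:
            innovate(codebook, recent)
    return indices
```

Setting k to infinity (never innovating after some point) corresponds to the second variant, where a fixed codebook is reused on all forthcoming blocks.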