Mutual information-based context quantization

  • Authors: Marco Cagnazzo, Marc Antonini, Michel Barlaud

  • Affiliations: TELECOM-ParisTech, 46 rue Barrault, 75634 Paris, France; I3S Laboratory, 2000 route des Lucioles, 06903 Sophia Antipolis, France; I3S Laboratory, 2000 route des Lucioles, 06903 Sophia Antipolis, France

  • Venue: Image Communication
  • Year: 2010

Abstract

Context-based lossless coding often suffers from the so-called context dilution problem, which arises when a large number of contexts is used to model high-order statistical dependencies among the data. In this case the learning process does not receive enough data per context, so the probability estimates are unreliable. To avoid this problem, state-of-the-art algorithms for lossless image coding resort to context quantization (CQ) into a few conditioning states, whose statistics can be estimated reliably. It was recognized early on that, to achieve the best compression ratio, contexts have to be grouped according to a maximal mutual information criterion. This leads to quantization algorithms that can determine a local minimum of the coding cost in the general case, and even the global minimum in the case of binary-valued input. This paper surveys the CQ problem and provides a detailed analytical formulation of it, which sheds light on some details of the optimization process. As a consequence, we find that state-of-the-art algorithms contain a suboptimal step. The proposed approach allows a steeper path toward the minimum of the cost function. Moreover, sufficient conditions are established under which a globally optimal solution can be found even when the input alphabet is not binary. Even though the paper focuses mainly on the theoretical aspects of CQ, a number of experiments have been performed to validate the proposed method (for the special case of lossless coding of segmentation maps), and encouraging results have been recorded.
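
To make the mutual information criterion concrete, here is a brief illustration with notation introduced for this note (the symbols X, C and Q are chosen here and are not necessarily the paper's): let X be the symbol to encode, C its full context, and Q(C) the conditioning state it is quantized to. The ideal per-symbol code length under the quantized model is the conditional entropy

    H(X | Q(C)) = H(X) - I(X; Q(C)),

and since H(X) does not depend on the quantizer, minimizing the coding cost over all quantizers is equivalent to maximizing the mutual information I(X; Q(C)) between the symbols and the conditioning states.

The sketch below shows one generic way such a quantizer could be built: a greedy procedure that repeatedly merges the pair of context groups whose merge increases the empirical coding cost the least, stopping when the desired number of conditioning states is reached. This is only an illustration of the CQ idea under these assumptions; it is not the algorithm proposed in the paper, and the function and parameter names are hypothetical.

```python
import numpy as np

def greedy_context_quantization(counts, n_states):
    """Illustrative greedy context quantization (not the paper's algorithm).

    counts:   (n_contexts, alphabet_size) array of symbol counts per context.
    n_states: desired number of conditioning states after quantization.
    Returns a list of groups, each group being a list of context indices.
    """
    def cost(c):
        # Empirical coding cost of a group in bits: n * H(X | group).
        n = c.sum()
        if n == 0:
            return 0.0
        p = c / n
        p = p[p > 0]
        return -n * np.sum(p * np.log2(p))

    groups = [[i] for i in range(len(counts))]
    group_counts = [counts[i].astype(float) for i in range(len(counts))]

    while len(groups) > n_states:
        # Find the pair of groups whose merge increases the total cost the least.
        best = None
        for a in range(len(groups)):
            for b in range(a + 1, len(groups)):
                merged = group_counts[a] + group_counts[b]
                delta = cost(merged) - cost(group_counts[a]) - cost(group_counts[b])
                if best is None or delta < best[0]:
                    best = (delta, a, b)
        _, a, b = best
        groups[a] += groups.pop(b)
        group_counts[a] = group_counts[a] + group_counts.pop(b)

    return groups
```

Minimizing the summed group costs n_g * H(X | group) amounts to minimizing H(X | Q(C)), and hence to maximizing I(X; Q(C)), which ties the sketch back to the criterion above.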