Comparing Notions of Computational Entropy

  • Authors:
  • Alexandre Pinto

  • Affiliations:
  • DCC-FC & LIACC, R. Campo Alegre 1021/1055, 4169-007 Porto, Portugal

  • Venue:
  • CiE '07: Proceedings of the 3rd Conference on Computability in Europe: Computation and Logic in the Real World
  • Year:
  • 2007


Abstract

In the information-theoretic world, entropy is both the measure of randomness in a source and a lower bound on the compression achievable for that source by any encoding scheme. But when we must restrict ourselves to efficient schemes, entropy no longer captures these notions well. For example, there are distributions with very low entropy that nonetheless look random to polynomial-time-bounded algorithms.

Different notions of computational entropy have been proposed to take the role of entropy in such settings. Results in [GS91] and [Wee04] suggest that when time bounds are introduced, the entropy of a distribution no longer coincides with the most effective compression for that source.

This paper analyses three measures that try to capture the compressibility of a source, establishing relations and separations between them and analysing two special cases: the uniform distribution and the universal distribution m^t over binary strings of a fixed size. It is shown that for the uniform distribution the three measures are equivalent, and that for m^t there is a clear separation between metric-type entropy, and thus pseudo-entropy, and the maximum compressibility of a source.
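As background for the abstract's two opening claims (these are standard information-theoretic facts, not results taken from the paper itself), the following sketch states Shannon's source coding theorem, which makes precise the sense in which entropy lower-bounds compression, and the pseudorandom-generator example of a low-entropy distribution that looks random to polynomial-time algorithms. Here C, \ell, G, U_n, and p(n) are the usual textbook symbols, not notation from the paper.

    % Shannon's source coding theorem: for a source X and any prefix-free
    % code C with length function \ell,
    H(X) = -\sum_{x} \Pr[X = x] \log_2 \Pr[X = x], \qquad
    \mathbb{E}[\ell(C(X))] \;\ge\; H(X),
    % and an optimal prefix code comes within one bit of this bound:
    \min_{C} \mathbb{E}[\ell(C(X))] \;<\; H(X) + 1.

    % Low entropy yet pseudorandom: if G : \{0,1\}^n \to \{0,1\}^{p(n)}
    % is a pseudorandom generator with stretch p(n) > n, then
    H(G(U_n)) \;\le\; n \;\ll\; p(n) = H(U_{p(n)}),
    % although no polynomial-time distinguisher can tell G(U_n)
    % from the uniform distribution U_{p(n)}.

The gap between H(G(U_n)) and p(n) is exactly the kind of mismatch that motivates the computational entropy measures the paper compares: to efficient algorithms the output of G carries about p(n) bits of apparent randomness, while its true entropy is at most n.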