Computational information theory. In: Complexity in Information Theory.
SIAM Journal on Computing
An introduction to Kolmogorov complexity and its applications (2nd ed.)
A Pseudorandom Generator from any One-way Function. SIAM Journal on Computing.
On the Length of Programs for Computing Finite Binary Sequences. Journal of the ACM (JACM).
Journal of Logic, Language and Information
On Pseudoentropy versus Compressibility. CCC '04 Proceedings of the 19th IEEE Annual Conference on Computational Complexity.
In the information-theoretic world, entropy is both the measure of the randomness of a source and a lower bound on the compression achievable for that source by any encoding scheme. But when we restrict ourselves to efficient schemes, entropy no longer captures these notions well. For example, there are distributions with very low entropy that nonetheless look random to polynomial-time-bounded algorithms. Different notions of computational entropy have been proposed to take the role of entropy in such settings. Results in [GS91] and [Wee04] suggest that when time bounds are introduced, the entropy of a distribution no longer coincides with the most effective compression for that source. This paper analyses three measures that try to capture the compressibility of a source, establishing relations and separations between them and analysing the two special cases of the uniform distribution and the universal distribution m^t over binary strings of a fixed size. It is shown that for the uniform distribution the three measures are equivalent, and that for m^t there is a clear separation between metric-type entropy, and thus pseudoentropy, and the maximum compressibility of a source.
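The classical fact the abstract starts from — entropy lower-bounds the achievable compression, and an optimal prefix code comes within one bit of it — can be checked numerically. The following is a minimal Python sketch (the function names and the example distribution are illustrative, not from the paper): it computes the Shannon entropy H(X) of a finite source and the expected codeword length L of a Huffman code for it, which satisfies H(X) <= L < H(X) + 1.

```python
import heapq
import math

def shannon_entropy(probs):
    """H(X) = -sum p*log2(p): the information-theoretic compression lower bound."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def huffman_lengths(probs):
    """Return optimal prefix-code codeword lengths for the given probabilities."""
    if len(probs) == 1:
        return [1]  # a single symbol still needs one bit
    # Heap items: (weight, unique tiebreaker, symbol indices in this subtree).
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    counter = len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for sym in s1 + s2:
            lengths[sym] += 1  # these symbols sit one level deeper after the merge
        heapq.heappush(heap, (p1 + p2, counter, s1 + s2))
        counter += 1
    return lengths

# A dyadic source, for which the Huffman code meets the entropy bound exactly.
probs = [0.5, 0.25, 0.125, 0.125]
H = shannon_entropy(probs)
L = sum(p * l for p, l in zip(probs, huffman_lengths(probs)))
print(H, L)  # 1.75 1.75
```

The abstract's point is that this tight correspondence breaks once the encoder and decoder must be efficient: the output of a pseudorandom generator has entropy at most the seed length, yet no polynomial-time scheme can compress it anywhere near that bound.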