Algorithmic Cross-Complexity and Relative Complexity

  • Authors:
  • Daniele Cerra, Mihai Datcu

  • Venue:
  • DCC '09: Proceedings of the 2009 Data Compression Conference
  • Year:
  • 2009

Abstract

Information content and compression are tightly related concepts that can be addressed by both classical and algorithmic information theory. Several quantities in the latter have been defined by analogy with notions of the former, such as entropy and mutual information, since the basic concepts of the two approaches share many common traits. In this work we extend this parallelism by defining the algorithmic versions of cross-entropy and relative entropy (or Kullback-Leibler divergence), two well-known concepts in classical information theory. We define the cross-complexity of an object x with respect to another object y as the amount of computational resources needed to specify x in terms of y, and the relative complexity of x with respect to y as the compression power lost when x is described in this way rather than by its shortest representation. Since the main drawback of these concepts is their uncomputability, a suitable approximation based on data compression is derived for both and applied to real data. This allows us to improve on the results obtained by similar, intuitively defined methods.
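To make the compression-based approximation concrete, here is a minimal sketch of the general idea, not the authors' implementation: it uses Python's zlib and prims the DEFLATE coder with y as a preset dictionary to stand in for "specifying x in terms of y". The function names, the C(x||y) notation, and the choice of DEFLATE are assumptions for illustration only.

```python
import zlib

def deflate_size(data: bytes, preset: bytes = b"") -> int:
    """Length in bytes of the DEFLATE stream for `data`; if `preset`
    is non-empty it is installed as the coder's preset dictionary."""
    if preset:
        # Only the last 32 KB of `preset` are usable (DEFLATE window limit).
        comp = zlib.compressobj(level=9, zdict=preset)
    else:
        comp = zlib.compressobj(level=9)
    return len(comp.compress(data) + comp.flush())

def cross_complexity(x: bytes, y: bytes) -> int:
    """Approximate C(x || y): cost of coding x when the coder is
    primed with y, a crude stand-in for describing x in terms of y."""
    return deflate_size(x, preset=y)

def relative_complexity(x: bytes, y: bytes) -> int:
    """Compression power lost by describing x through y rather than
    by x's own (approximately shortest) compressed form."""
    return cross_complexity(x, y) - deflate_size(x)

if __name__ == "__main__":
    x = b"the quick brown fox jumps over the lazy dog " * 40
    y = b"the quick brown fox " * 90
    # Smaller cross-complexity when y resembles x.
    print(cross_complexity(x, y), cross_complexity(x, b"zzzz" * 90))
    # Near zero (possibly slightly negative from coder overheads)
    # when an object is described in terms of itself.
    print(relative_complexity(x, x))
```

A real compressor only approximates Kolmogorov complexity, so these scores inherit the coder's quirks (window size, header overhead) and can deviate from the theoretical quantities; the paper's point is that such approximations are nonetheless usable on real data.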