Discovering Dependencies via Algorithmic Mutual Information: A Case Study in DNA Sequence Comparisons

  • Authors:
  • Aleksandar Milosavljević

  • Affiliations:
  • Genome Structure Group, Center for Mechanistic Biology and Biotechnology, Argonne National Laboratory, Argonne, Illinois 60439-4833. Current address: CuraGen Corporation, 322 East Main Stre ...

  • Venue:
  • Machine Learning - Special issue on applications in molecular biology
  • Year:
  • 1995


Abstract

Algorithmic mutual information is a central concept in algorithmic information theory and may be measured as the difference between the independent and joint minimal encoding lengths of objects; it is also a central concept in Chaitin's fascinating mathematical definition of life. We explore the applicability of algorithmic mutual information as a tool for discovering dependencies in biology. To determine the significance of discovered dependencies, we extend the newly proposed algorithmic significance method. The main theorem of the extended method states that d bits of algorithmic mutual information imply dependency at the significance level 2^(-d+O(1)). We apply a heuristic version of the method to one of the main problems in DNA and protein sequence comparison: deciding whether observed similarity between sequences should be explained by their relatedness or by the mere presence of some shared internal structure, e.g., shared internal repetitive patterns. We take advantage of the fact that mutual information factors out sequence similarity that is due to shared internal structure and thus enables discovery of truly related sequences. In addition to providing a general framework for sequence comparisons, we also propose an efficient way to compare sequences based on their subword composition that does not require any a priori assumptions about k-tuple length.
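The measurement scheme described above — mutual information as the difference between independent and joint minimal encoding lengths — can be sketched with a standard compressor standing in for the (uncomputable) minimal encoding. This is an illustrative approximation, not the paper's encoding scheme: the compressor (here zlib), the sequence lengths, and the helper names are all assumptions for the sketch.

```python
import zlib


def encoding_length(data: bytes) -> int:
    # Compressed length in bytes: a crude, computable upper bound
    # on the minimal encoding length of `data`.
    return len(zlib.compress(data, 9))


def mutual_information_bits(x: bytes, y: bytes) -> int:
    # Approximate algorithmic mutual information in bits:
    # I(x : y) ~ C(x) + C(y) - C(xy), i.e. the savings from
    # encoding the two sequences jointly rather than independently.
    return 8 * (encoding_length(x) + encoding_length(y) - encoding_length(x + y))


def significance_level(d_bits: float) -> float:
    # Per the extended method's main theorem (ignoring the O(1) term),
    # d bits of mutual information imply dependency at level 2^-d.
    return 2.0 ** (-d_bits)
```

Under this approximation, a sequence paired with a copy of itself compresses jointly almost as cheaply as one copy alone, yielding large mutual information, while two independently generated sequences yield near-zero mutual information — the behavior the abstract relies on to separate truly related sequences from coincidental similarity.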