Causal Inference on Discrete Data Using Additive Noise Models

  • Authors:
  • Jonas Peters, Dominik Janzing, Bernhard Schölkopf

  • Affiliations:
  • Max Planck Institute for Biological Cybernetics, Tübingen (all authors)

  • Venue:
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
  • Year:
  • 2011

Abstract

Inferring the causal structure of a set of random variables from a finite sample of the joint distribution is an important problem in science. The case of two random variables is particularly challenging since no (conditional) independences can be exploited. Recent methods based on additive noise models suggest the following principle: whenever the joint distribution P(X, Y) admits such a model in one direction, e.g., Y = f(X) + N with N ⊥⊥ X, but does not admit the reversed model X = g(Y) + Ñ with Ñ ⊥⊥ Y, one infers the former direction to be causal (i.e., X → Y). Up to now, these approaches have only dealt with continuous variables. In many situations, however, the variables of interest are discrete or even have only finitely many states. In this work, we extend the notion of additive noise models to these cases. We prove that an additive noise model can almost never be fit in both directions. We further propose an efficient algorithm that performs this kind of causal inference on finite samples of discrete variables, and we show that it works on both synthetic and real data sets.
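
To make the decision rule concrete, below is a minimal Python sketch of additive-noise-based direction inference for discrete variables. It rests on simplifying assumptions: the regression function is taken to be the conditional mode, and independence between the putative cause and the residuals is checked with a chi-squared test. The function names (fit_residuals, independence_pvalue, infer_direction) are illustrative and not from the paper; this is not the paper's exact algorithm, only an illustration of the underlying decision rule.

import numpy as np
from scipy.stats import chi2_contingency

def fit_residuals(cause, effect):
    # Regress the effect on the cause by taking the conditional mode of
    # `effect` for each value of `cause`, then return the additive residuals.
    mode = {}
    for v in np.unique(cause):
        vals, counts = np.unique(effect[cause == v], return_counts=True)
        mode[v] = vals[np.argmax(counts)]
    return effect - np.array([mode[v] for v in cause])

def independence_pvalue(a, b):
    # p-value of a chi-squared independence test on the contingency table of a and b.
    a_vals, b_vals = np.unique(a), np.unique(b)
    table = np.array([[np.sum((a == va) & (b == vb)) for vb in b_vals]
                      for va in a_vals])
    return chi2_contingency(table)[1]

def infer_direction(x, y, alpha=0.05):
    # Accept a direction only if its residuals look independent of the putative
    # cause while the residuals of the reversed model do not.
    p_xy = independence_pvalue(x, fit_residuals(x, y))  # tests N ⊥⊥ X for Y = f(X) + N
    p_yx = independence_pvalue(y, fit_residuals(y, x))  # tests Ñ ⊥⊥ Y for X = g(Y) + Ñ
    if p_xy > alpha >= p_yx:
        return "X -> Y"
    if p_yx > alpha >= p_xy:
        return "Y -> X"
    return "undecided"

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.integers(0, 4, size=1000)                        # cause, uniform on {0, ..., 3}
    n = rng.choice([0, 1, 2], size=1000, p=[0.7, 0.2, 0.1])  # noise, independent of x
    y = 2 * x + n                                            # additive noise model X -> Y
    print(infer_direction(x, y))                             # typically prints "X -> Y"

In this synthetic example the forward model leaves residuals distributed identically for every value of x, while the backward regression cannot do so, which is exactly the asymmetry the method exploits.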