We propose a method to quantify the complexity of a conditional probability measure by a Hilbert space seminorm of the logarithm of its density. Reproducing kernel Hilbert spaces (RKHSs) provide a flexible tool for defining such a seminorm through the choice of an appropriate kernel. We present several examples with artificial data sets in which our kernel-based complexity measure agrees with an intuitive notion of the complexity of densities. The measure is intended to provide a new approach to inferring causal directions: the idea is that factorizing the joint probability measure P(effect, cause) into P(effect|cause)P(cause) typically yields "simpler" and "smoother" terms than factorizing it into P(cause|effect)P(effect). Since the conventional constraint-based approach to causal discovery cannot determine the causal direction between only two variables, our inference principle can be particularly useful in combination with other existing methods. We provide several simple examples with real-world data in which the true causal direction indeed leads to simpler (conditional) densities.
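The inference principle above can be illustrated with a minimal, self-contained sketch — not the authors' implementation. It estimates log p(y|x) by 2-D histogramming (an illustrative simplification; the paper works with kernel-based density representations), scores each direction by a Gaussian-kernel RKHS-norm proxy of the log conditional density, and prefers the direction with the smaller score. All function names, the toy data, and the bandwidth/regularization constants are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def rkhs_norm_sq(values, points, bandwidth=0.5, reg=1e-6):
    """Squared RKHS norm f^T K^{-1} f of the minimum-norm Gaussian-kernel
    interpolant of `values` at `points` -- a proxy for the seminorm in the text."""
    d = points[:, None] - points[None, :]
    K = np.exp(-d**2 / (2 * bandwidth**2)) + reg * np.eye(len(points))
    return float(values @ np.linalg.solve(K, values))

def direction_score(x, y, bins=15):
    """Roughness of log p(y|x): estimate the conditional density on a grid
    by 2-D histogramming, then sum RKHS-norm proxies over x-slices."""
    H, _, ye = np.histogram2d(x, y, bins=bins, density=True)
    centers = 0.5 * (ye[:-1] + ye[1:])
    score = 0.0
    for row in H:                      # each row: density of y given an x-bin
        if row.sum() <= 0:
            continue
        cond = row / row.sum()         # normalized slice, up to binning scale
        logp = np.log(cond + 1e-12)    # small floor avoids log(0)
        score += rkhs_norm_sq(logp, centers)
    return score

# Toy additive-noise data: X -> Y with Y = tanh(2X) + small Gaussian noise
x = rng.uniform(-2, 2, 2000)
y = np.tanh(2 * x) + 0.2 * rng.normal(size=2000)

s_xy = direction_score(x, y)   # complexity proxy for p(y|x)
s_yx = direction_score(y, x)   # complexity proxy for p(x|y)
print("inferred direction:", "X->Y" if s_xy < s_yx else "Y->X")
```

The sketch compares only the conditional terms; a closer rendering of the principle would also add complexity scores for the marginals P(cause) and P(effect) before comparing the two factorizations.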