Probabilistic reasoning in intelligent systems: networks of plausible inference
Nonmonotonic reasoning, preferential models and cumulative logics
Artificial Intelligence (special issue on knowledge representation)
What does a conditional knowledge base entail?
Artificial Intelligence
Qualitative reasoning with imprecise probabilities
Journal of Intelligent Information Systems - Special issue: fuzzy logic and uncertainty management in information systems
Nonmonotonic reasoning, conditional objects and possibility theory
Artificial Intelligence
Causality: models, reasoning, and inference
Probabilistic Reasoning Under Coherence in System P
Annals of Mathematics and Artificial Intelligence
An Empirical Test of Patterns for Nonmonotonic Inference
Annals of Mathematics and Artificial Intelligence
Predicting causality ascriptions from background knowledge: model and experimental validation
International Journal of Approximate Reasoning
A Comparative Study of Six Formal Models of Causal Ascription
SUM '08 Proceedings of the 2nd international conference on Scalable Uncertainty Management
Transitive Observation-Based Causation, Saliency, and the Markov Condition
SUM '08 Proceedings of the 2nd international conference on Scalable Uncertainty Management
Background default knowledge and causality ascriptions
Proceedings of ECAI 2006: 17th European Conference on Artificial Intelligence, Riva del Garda, Italy, August 29 -- September 1, 2006
Ordinal and probabilistic representations of acceptance
Journal of Artificial Intelligence Research
If A caused B and B caused C, did A cause C? Although laypersons commonly perceive causality as transitive, some philosophers have questioned this assumption, and models of causality in artificial intelligence are often agnostic about transitivity. We consider two formal models of causation that differ in how they represent uncertainty. The quantitative model uses a crude probabilistic definition, arguably the common core of more sophisticated quantitative definitions; the qualitative model uses a definition based on nonmonotonic consequence relations. The two models lay bare different sufficient conditions for the transitivity of causation: the Markov condition on events for the quantitative model, and a so-called saliency condition (A is perceived as a typical cause of B) for the qualitative model. We explore the formal and empirical relations between these sufficient conditions, and between the underlying definitions of perceived causation. These connections shed light on the range of applicability of each model, contrasting commonsense causal reasoning (supposedly qualitative) with scientific causation (more naturally quantitative). These speculations are supported by a series of three behavioral experiments.
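The quantitative claim in the abstract can be illustrated numerically. A minimal sketch, under the assumption that the "crude probabilistic definition" is probability raising (A causes B iff P(B|A) > P(B)): on a three-event chain A → B → C satisfying the Markov condition (C depends on A only through B), probability raising from A to B and from B to C propagates to A and C. All event names and probability values below are illustrative, not taken from the paper.

```python
# Sketch: probability-raising causation on a Markov chain A -> B -> C.
# Crude definition assumed here: cause(X, Y) iff P(Y | X) > P(Y).
from itertools import product

# Illustrative conditional tables (assumed values).
p_a = 0.3
p_b_given_a = {True: 0.9, False: 0.2}   # P(B | A)
p_c_given_b = {True: 0.8, False: 0.1}   # P(C | B); Markov: no direct A -> C link

def joint(a, b, c):
    """P(A=a, B=b, C=c) under the Markov factorization P(A) P(B|A) P(C|B)."""
    pa = p_a if a else 1 - p_a
    pb = p_b_given_a[a] if b else 1 - p_b_given_a[a]
    pc = p_c_given_b[b] if c else 1 - p_c_given_b[b]
    return pa * pb * pc

def prob(pred):
    """Probability of the set of worlds (a, b, c) satisfying pred."""
    return sum(joint(*w) for w in product([True, False], repeat=3) if pred(*w))

A, B, C = 0, 1, 2  # indices of the three events in a world tuple

def causes(x, y):
    """Probability-raising test: P(Y | X) > P(Y)."""
    p_x = prob(lambda *w: w[x])
    p_xy = prob(lambda *w: w[x] and w[y])
    p_y = prob(lambda *w: w[y])
    return p_xy / p_x > p_y

print(causes(A, B), causes(B, C), causes(A, C))  # -> True True True
```

With these numbers, P(B|A) = 0.9 > P(B) = 0.41 and P(C|B) = 0.8 > P(C) ≈ 0.387, and transitivity follows: P(C|A) = 0.73 > P(C). The identity P(C|A) − P(C) = (P(C|B) − P(C|¬B)) · (P(B|A) − P(B)), valid under the Markov factorization, shows why this is no accident of the chosen values.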