Nonmonotonic reasoning, preferential models and cumulative logics
Artificial Intelligence
Predicting causality ascriptions from background knowledge: model and experimental validation
International Journal of Approximate Reasoning
A Comparative Study of Six Formal Models of Causal Ascription
SUM '08 Proceedings of the 2nd international conference on Scalable Uncertainty Management
Making Sense of a Sequence of Events: A Psychologically Supported AI Implementation
SUM '09 Proceedings of the 3rd International Conference on Scalable Uncertainty Management
Making sense is a goal-driven process that integrates perceptual input into a cohesive internal representation. For human agents, it plays a central role in causal ascription and responsibility assignment. In this paper, we outline a theory of making sense. Making sense is hypothesized to arise from the interaction between perceptual input and context-dependent knowledge activated in long-term memory. Accordingly, the model draws heavily on psychological findings related to memory processing. The psychological grounding is completed by knowledge about cognitive architecture and supplemented by the literature on the attribution of cause and responsibility. The evidence is then integrated in an artificial intelligence (AI) model of making sense. The formalism and the mechanisms are inspired by previous AI research on causal ascription. We present a detailed account of the computer implementation. The implementation makes clear how knowledge influences the process of making sense in agreement with the psychological assumptions underlying the formal model. © 2012 Wiley Periodicals, Inc.