What is believed is what is explained (sometimes)

  • Authors:
  • Renwei Li; Luís Moniz Pereira

  • Affiliations:
  • Department of Computer Science, Universidade Nova de Lisboa, Monte da Caparica, Portugal (both authors)

  • Venue:
  • AAAI'96: Proceedings of the Thirteenth National Conference on Artificial Intelligence - Volume 1
  • Year:
  • 1996

Abstract

This paper presents a formal and computational methodology for incorporating new knowledge into knowledge bases about actions and change. We employ Gelfond and Lifschitz's action description language A to describe domains of actions. Knowledge bases for such domains are obtained by a new translation from domain descriptions in A into abductive normal logic programs that incorporate a time dimension, and these knowledge bases are shown to be both sound and complete with respect to their domain descriptions. In particular, we propose a possible causes approach (PCA) to belief update, based on the slogan: What is believed is what is explained. A possible cause of new knowledge consists of abduced occurrences of actions and value propositions about the initial state of the domain of actions that together allow the new knowledge to be derived. We show how to compute possible causes with abductive logic programming and present techniques for improving search efficiency. We use examples to compare our possible causes approach with Ginsberg's possible worlds approach (PWA) and Winslett's possible models approach (PMA).
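
To make the possible causes idea concrete, here is a minimal brute-force sketch in Python. The Yale-shooting-style domain (fluents alive and loaded; actions load, shoot, wait), the effect-proposition encoding, and the exhaustive search over initial values and action sequences are all illustrative assumptions of this sketch, not the paper's actual translation; the paper computes possible causes with an abductive logic programming procedure over the translated programs.

```python
from itertools import product

# A tiny propositional action domain in the spirit of the language A.
# Fluents and actions below are illustrative (Yale shooting style),
# not taken from the paper.
FLUENTS = ["alive", "loaded"]

# Effect propositions: action -> list of (effect literal, preconditions),
# where a literal is (fluent, value).
EFFECTS = {
    "load":  [(("loaded", True), [])],
    "shoot": [(("alive", False), [("loaded", True)]),
              (("loaded", False), [("loaded", True)])],
    "wait":  [],
}

def holds(state, literal):
    fluent, value = literal
    return state[fluent] == value

def apply_action(state, action):
    """Progress a state by an action's effect propositions; unaffected
    fluents persist by inertia."""
    new_state = dict(state)
    for effect, preconditions in EFFECTS[action]:
        if all(holds(state, p) for p in preconditions):
            fluent, value = effect
            new_state[fluent] = value
    return new_state

def possible_causes(observation, horizon=2):
    """Enumerate candidate 'possible causes' of a new observation:
    pairs of (abduced initial-value propositions, abduced action
    occurrences) whose progression makes the observation hold in the
    final state. A naive stand-in for abductive search."""
    causes = []
    # Every complete assignment to the initial fluents is one candidate
    # set of abduced value propositions about the initial state.
    for initial_values in product([True, False], repeat=len(FLUENTS)):
        initial = dict(zip(FLUENTS, initial_values))
        # Every action sequence up to the horizon is one candidate set
        # of abduced action occurrences.
        for length in range(horizon + 1):
            for sequence in product(EFFECTS, repeat=length):
                state = initial
                for action in sequence:
                    state = apply_action(state, action)
                if holds(state, observation):
                    causes.append((initial, sequence))
    return causes

if __name__ == "__main__":
    # New knowledge to be incorporated: alive is observed to be false.
    for initial, actions in possible_causes(("alive", False), horizon=2):
        print("initial:", initial, " actions:", actions)
```

Running the sketch for the observation that alive is false prints each abduced pair of initial values and action occurrences that would explain it, for example an initially loaded gun followed by an occurrence of shoot; the paper's procedure differs in that it works on the abductive logic program produced by the translation and prunes the search rather than enumerating exhaustively.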