Inverse resolution as belief change

  • Authors:
  • Maurice Pagnucco; David Rajaratnam

  • Affiliations:
  • ARC Centre of Excellence for Autonomous Systems, School of Computer Science and Engineering, The University of New South Wales, Sydney, NSW, Australia; School of Computer Science and Engineering, The University of New South Wales, Sydney, NSW, Australia

  • Venue:
  • IJCAI'05: Proceedings of the 19th International Joint Conference on Artificial Intelligence
  • Year:
  • 2005

Abstract

Belief change is concerned with modelling the way in which a rational reasoner maintains its beliefs as it acquires new information. Of particular interest is the way in which new beliefs are acquired or determined, and old beliefs are retained or discarded. A parallel can be drawn to symbolic machine learning approaches, where examples to be categorised are presented to the learning system and a theory is subsequently derived, usually over a number of iterations. It is therefore not surprising that the term 'theory revision' is used to describe this process [Ourston and Mooney, 1994]. Viewing a machine learning system as a rational reasoner allows us to begin seeing these seemingly disparate mechanisms in a similar light. In this paper we are concerned with characterising the well-known inverse resolution operations [Muggleton, 1987; 1992] (and, more recently, inverse entailment [Muggleton, 1995]) as AGM-style belief change operations. In particular, our account is based on the abductive expansion operation [Pagnucco et al., 1994; Pagnucco, 1996] and is characterised using the notion of epistemic entrenchment [Gärdenfors and Makinson, 1988] extended for this operation. This work provides a basis for reconciling work in symbolic machine learning and belief revision. Moreover, it allows machine learning techniques to be understood as forms of nonmonotonic reasoning.
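
Since the abstract names the inverse resolution operations without showing one, a minimal worked sketch of the absorption ("V") operator on propositional Horn clauses may help fix ideas. The clause encoding and the function below are illustrative assumptions made for this summary, not the paper's own formulation, which is cast in terms of AGM-style belief change and epistemic entrenchment.

    # Sketch of the absorption ("V") inverse resolution operator.
    # Clauses are propositional Horn clauses, represented here as
    # (head, frozenset(body)) pairs -- an assumed encoding, not the paper's.

    def absorption(c1, resolvent):
        """Given C1 = q :- A and a resolvent C = p :- A, B,
        reconstruct C2 = p :- q, B so that resolving C1 with C2 yields C."""
        q, a = c1
        p, body = resolvent
        if not a <= body:  # C1's body must be contained in the resolvent's body
            raise ValueError("absorption does not apply")
        b = body - a
        return (p, frozenset({q}) | b)

    # Example: from C1 = mammal :- dog and C = warm_blooded :- dog,
    # absorption induces C2 = warm_blooded :- mammal.
    c1 = ("mammal", frozenset({"dog"}))
    c = ("warm_blooded", frozenset({"dog"}))
    print(absorption(c1, c))  # ("warm_blooded", frozenset({"mammal"}))

Read as belief change, applying such an operator adds a new, more general clause to the theory; the paper characterises this step via the abductive expansion operation constrained by epistemic entrenchment.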