Reasoning with the outcomes of plan execution in intentional agents

  • Authors:
  • Timothy William Cleaver; Abdul Sattar; Kewen Wang

  • Affiliations:
  • Institute for Integrated and Intelligent Systems (IIIS), Griffith University, Australia (all three authors)

  • Venue:
  • AI'05: Proceedings of the 18th Australian Joint Conference on Advances in Artificial Intelligence
  • Year:
  • 2005

Abstract

Intentional agents must be aware of their successes and failures to truly assess their own progress towards their intended goals. However, our analysis of intentional agent systems indicates that existing architectures are inadequate in this regard. Specifically, existing systems provide few, if any, mechanisms for monitoring the failure of behaviors. This inability to detect failure means that agents retain an unrealistically optimistic view of the success of their behaviors and the state of their environment. In this paper we extend the solution proposed in [1] in three ways. Firstly, we extend the formulation to handle cases in which an agent has conflicting evidence regarding the causation of the effects of a plan or action; we do this by identifying a number of policies that an agent may use to resolve these conflicts. Secondly, we provide mechanisms by which the agent can invoke its failure-handling routines to recover when a failure is detected. Lastly, we lift the requirement that all effects be realized simultaneously and allow for progressive satisfaction of effects. Like the original solution, these extensions can be applied to existing BDI systems.
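
To make the three extensions concrete, the sketch below shows how they might fit together in a BDI-style monitoring loop. It is a minimal illustration, not the paper's formalism: every name here (Policy, Effect, Plan, resolve, monitor, on_failure) is a hypothetical stand-in, and the two policies shown are just one plausible instantiation of the conflict-resolution policies the abstract mentions.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, List


class Policy(Enum):
    """Hypothetical policies for resolving contradictory effect evidence."""
    OPTIMISTIC = 1   # assume the effect holds when evidence conflicts
    PESSIMISTIC = 2  # assume the effect failed when evidence conflicts


@dataclass
class Effect:
    name: str
    evidence_for: int = 0      # observations suggesting the effect was caused
    evidence_against: int = 0  # observations suggesting it was not
    satisfied: bool = False

    def resolve(self, policy: Policy) -> bool:
        # Unambiguous evidence decides directly.
        if self.evidence_for and not self.evidence_against:
            return True
        if self.evidence_against and not self.evidence_for:
            return False
        # Conflicting evidence is settled by the agent's chosen policy.
        return policy is Policy.OPTIMISTIC


@dataclass
class Plan:
    effects: List[Effect]
    on_failure: Callable[[Effect], None]  # failure-handling routine

    def monitor(self, policy: Policy) -> bool:
        """Progressively mark effects as satisfied; invoke recovery on failure."""
        for effect in self.effects:
            if effect.satisfied:
                continue  # achieved on an earlier monitoring cycle
            if effect.resolve(policy):
                effect.satisfied = True  # progressive, not simultaneous, satisfaction
            else:
                self.on_failure(effect)  # detected failure: hand off to recovery
                return False
        return True


# Example: conflicting evidence about 'door_open' triggers the recovery
# routine under a pessimistic policy.
door_open = Effect("door_open", evidence_for=1, evidence_against=1)
plan = Plan(effects=[door_open],
            on_failure=lambda e: print(f"effect '{e.name}' failed; replanning"))
plan.monitor(Policy.PESSIMISTIC)
```

Under the OPTIMISTIC policy this agent behaves much like the architectures the abstract criticizes, assuming success whenever evidence conflicts; the PESSIMISTIC policy trades false alarms for earlier invocation of failure handling.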