Simulating belief systems of autonomous agents

  • Authors:
  • Hemant K. Bhargava; William C. Branley, Jr.

  • Affiliations:
  • Code AS-BH, Naval Postgraduate School, 555 Dyer Road, Room 214, Monterey, CA 93943-5000, USA; U.S. Army A.I. Center, ATTN: SAIS-AI, 107 Army Pentagon, Washington, DC 20310-0107, USA

  • Venue:
  • Decision Support Systems
  • Year:
  • 1995

Abstract

Autonomous agents in computer simulations lack the usual mechanisms that their human counterparts use to acquire information. In many such simulations, it is not desirable for the agent to have access to complete and correct information about its environment. We examine how imperfection in available information may be simulated in the case of autonomous agents. We determine probabilistically what the agent may detect, through hypothetical sensors, in a given situation. These detections are combined with the agent's knowledge base to infer observations and beliefs. Inherent in this task is a degree of uncertainty in choosing the most appropriate observation or belief. We describe and compare two approaches, one numerical and one based on defeasible logic, for simulating an appropriate belief in light of conflicting detection values at a given point in time. We discuss the application of this technique to autonomous forces in combat simulation systems.
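
The abstract outlines a pipeline of probabilistic detection followed by numerical resolution of conflicting detections into a single belief. The sketch below is only an illustration of that general idea, not the paper's method: the sensor names, detection probabilities, and the confidence-summing rule are all assumptions introduced here for the example.

```python
# Illustrative sketch only. Sensor models, probabilities, and the scoring rule
# are hypothetical; the paper's numerical and defeasible-logic approaches are
# not reproduced here.
import random
from collections import defaultdict

# Hypothetical sensors: each has a probability of detecting anything at all,
# and a reliability governing whether its report matches the ground truth.
SENSOR_MODELS = {
    "visual": {"p_detect": 0.7, "reliability": 0.9},
    "radar":  {"p_detect": 0.9, "reliability": 0.6},
}

def simulate_detections(ground_truth, decoy):
    """Probabilistically generate (possibly conflicting) sensor detections."""
    detections = []
    for name, model in SENSOR_MODELS.items():
        if random.random() < model["p_detect"]:
            # An imperfect sensor may report the decoy instead of the truth.
            reported = ground_truth if random.random() < model["reliability"] else decoy
            detections.append((name, reported, model["reliability"]))
    return detections

def choose_belief(detections):
    """Numerical resolution: sum reliabilities per candidate, keep the maximum."""
    scores = defaultdict(float)
    for _sensor, candidate, reliability in detections:
        scores[candidate] += reliability
    return max(scores, key=scores.get) if scores else None

if __name__ == "__main__":
    random.seed(42)
    detections = simulate_detections(ground_truth="tank", decoy="truck")
    print("detections:", detections)
    print("agent believes:", choose_belief(detections))
```

In this toy version the agent never sees the ground truth directly; its belief is whatever candidate the weighted detections favor, which may be wrong, mirroring the imperfect-information behavior the abstract describes.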