Inconsistency as a diagnostic tool in a society of intelligent agents

  • Authors:
  • Marjorie McShane; Stephen Beale; Sergei Nirenburg; Bruce Jarrell; George Fantry

  • Affiliations:
  • Department of Computer Science and Electrical Engineering, University of Maryland Baltimore County, Baltimore, MD 21250, USA (McShane, Beale, Nirenburg); University of Maryland School of Medicine, Baltimore, MD 21201, USA (Jarrell, Fantry)

  • Venue:
  • Artificial Intelligence in Medicine
  • Year:
  • 2012

Abstract

Objective: To use the detection of clinically relevant inconsistencies to support the reasoning capabilities of intelligent agents acting as physicians and tutors in the realm of clinical medicine.

Methods: We are developing a cognitive architecture, OntoAgent, that supports the creation and deployment of intelligent agents capable of simulating human-like abilities. The agents, which have a simulated mind and, if applicable, a simulated body, are intended to operate as members of multi-agent teams featuring both artificial and human agents. The agent architecture and its underlying knowledge resources and processors are being developed in a sufficiently generic way to support a variety of applications.

Results: We show how several types of inconsistency can be detected and leveraged by intelligent agents in the setting of clinical medicine. The types of inconsistency discussed include: test results that do not support the doctor's hypothesis; the results of a treatment trial that do not support a clinical diagnosis; and information reported by the patient that is not consistent with observations. We show the opportunities afforded by detecting each inconsistency, such as rethinking a hypothesis, reevaluating evidence, and motivating or teaching a patient.

Conclusions: Inconsistency is not always merely a failure to achieve the goal of consistency; rather, it can be a valuable trigger for further exploration in the realm of clinical medicine. The OntoAgent cognitive architecture, along with its extensive suite of knowledge resources and processors, is sufficient to support sophisticated agent functioning such as detecting clinically relevant inconsistencies and using them to benefit patient-centered medical training and practice.
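
The abstract describes inconsistency detection only in prose. As a rough illustration of the idea, and not of the OntoAgent implementation itself, the following minimal Python sketch checks a diagnostic hypothesis's expected findings against observed results and maps any mismatch to a follow-up action rather than an error. All class and function names (Hypothesis, Observation, detect_inconsistencies, respond_to_inconsistency) and the example findings are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    """A diagnostic hypothesis together with the findings it predicts (hypothetical model)."""
    name: str
    expected_findings: dict = field(default_factory=dict)  # finding -> expected value

@dataclass
class Observation:
    """A single observed finding, e.g. a test result or a patient report."""
    finding: str
    value: str

def detect_inconsistencies(hypothesis, observations):
    """Return the findings whose observed values contradict the hypothesis."""
    conflicts = []
    for obs in observations:
        expected = hypothesis.expected_findings.get(obs.finding)
        if expected is not None and expected != obs.value:
            conflicts.append((obs.finding, expected, obs.value))
    return conflicts

def respond_to_inconsistency(conflicts):
    """Treat a detected inconsistency as a trigger for further exploration, not a failure."""
    if not conflicts:
        return "hypothesis supported; continue current plan"
    details = ", ".join(f"{f} (expected {e}, observed {o})" for f, e, o in conflicts)
    return "re-evaluate evidence and consider alternative hypotheses: " + details

if __name__ == "__main__":
    # Hypothetical case: test results do not support the working diagnosis.
    working_dx = Hypothesis("GERD", {"endoscopy": "esophagitis", "pH monitoring": "abnormal"})
    results = [Observation("endoscopy", "normal"), Observation("pH monitoring", "abnormal")]
    print(respond_to_inconsistency(detect_inconsistencies(working_dx, results)))
```

In this toy setup, the mismatch between the expected and observed endoscopy finding is surfaced as a prompt to rethink the hypothesis, mirroring the abstract's point that inconsistency can drive further reasoning rather than simply marking an unmet goal.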