Deciding when to trust automation in a policy-based city management game: Policity

  • Authors:
  • Kenya Freeman Oduor; Christopher S. Campbell

  • Affiliations:
  • IBM, Durham, North Carolina; IBM Almaden Research Center, San Jose, California

  • Venue:
  • Proceedings of the 2007 Symposium on Computer Human Interaction for the Management of Information Technology
  • Year:
  • 2007

Abstract

As businesses and governments strive to improve productivity and deploy more elaborate IT systems, the need for complex systems management grows. Completely automated systems are not yet a reality, so the benefits that automation offers can only be achieved through collaboration with human operators. The question, then, is what factors influence how this human-computer relationship is coordinated. A great deal of research has pointed to trust and perceived reliability as key factors in whether automation will be properly used, misused, or disused in systems management. To explore this question, we conducted an experiment in which an automated decision aid presented suggestions, or policies, to participants while they managed a simulated city (i.e., Policity). The goal was to maximize the health of the city's population by adding hospitals, housing, businesses, and other facilities and services. Participants were randomly assigned to conditions in which the decision aid performed at varying (i.e., high or low) reliability levels. Results showed that users' perception of the decision aid's reliability directly influenced their trust in the decision aid. In turn, the relationship between users' perceived reliability and the decision aid's actual reliability had a direct effect on human performance. Population health suffered when the decision aid's suggestions were disused or misused compared to when they were appropriately used. Additional results and implications are discussed.