Trust by design: information requirements for appropriate trust in automation

  • Authors: Pierre P. Duez, Michael J. Zuliani, Greg A. Jamieson
  • Affiliations: University of Toronto; IBM Canada Ltd.; University of Toronto
  • Venue: CASCON '06: Proceedings of the 2006 Conference of the Center for Advanced Studies on Collaborative Research
  • Year: 2006

Abstract

Trust has, since the early stages of IBM's Autonomic Computing (AC) initiative, been recognized as an important factor in the success of new autonomic features. If operators do not trust the new automated tools, they will not use them -- no matter how useful or efficient those tools might be. Despite this stated awareness of trust as a major contributing factor to successful operator adoption of AC functionality (e.g., [11]), no clear process for explicitly designing for operator trust has emerged. The purpose of our research is to develop such a process: a theoretically grounded method for designing for appropriate trust in automation.

We define "appropriate trust" as it is described in [6]. By this definition, appropriate trust has two components. The first is proper calibration of trust: the operator trusts the automation to the degree of its capability, without over-trust or distrust. The second is resolution of trust: the operator must be sensitive to different or changing conditions (functional or temporal) that might affect the ability of the automation to achieve the operator's goals.

In our research, we have drawn on the extensive review of the trust literature by Lee and See [6], who examined the concept of trust from multiple perspectives (e.g., organizational, psychological, and interpersonal). Based on that review, Lee and See developed a model of trust in automation that describes the feedback loops informing one's attitude of trust (or distrust) towards automation. Furthermore, Lee and See identify a continuum of attributional abstraction: the kinds of information on which an operator may base trust in an automated tool. Three categories are defined along this continuum: purpose-, process-, and performance-related information, all of which are described as necessary to achieving appropriate trust.

Although Lee and See [6] provide these categories of information, they do not provide a process by which the appropriate information might be identified for a given automated tool. We hypothesized that Work Domain Analysis (WDA; [12]) might serve to provide such a list. WDA is part of a multi-stage analytic framework developed for the analysis of complex socio-technical systems. It is a constraint-based, formative analysis that describes the realm of possible actions rather than a single prescribed path. WDA, we reasoned, could be adapted and applied to the problem of designing for appropriate trust in automation.

In this paper, we will introduce the model of trust in automation described in [6]. We will also introduce WDA, and then describe how this analysis can be applied to the question of trust in automation. Finally, we will present a case study of new automation in the IBM® DB2® Version 9.1 for Linux®, UNIX®, and Windows® product (DB2 V9.1), in which we applied WDA to identify specific information requirements for appropriate trust in the Self-Tuning Memory Manager, and used these findings to shape documentation and logging for this new automated functionality.
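As a hypothetical illustration (not taken from the paper), the sketch below shows one way the purpose-, process-, and performance-related information requirements described by Lee and See could be recorded for an automated component such as the Self-Tuning Memory Manager. The class name, fields, and example entries are assumptions for illustration only; the paper derives its actual information requirements through WDA rather than through any such data structure.

    from dataclasses import dataclass, field

    # Hypothetical sketch: recording trust-related information requirements
    # for an automated component, grouped by Lee and See's three categories.
    # All names and example entries below are illustrative assumptions.

    @dataclass
    class TrustInformationRequirements:
        component: str
        purpose: list[str] = field(default_factory=list)      # why the automation exists and what it is for
        process: list[str] = field(default_factory=list)      # how the automation operates
        performance: list[str] = field(default_factory=list)  # how well it has been and is performing

    stmm = TrustInformationRequirements(
        component="Self-Tuning Memory Manager (DB2 V9.1)",
        purpose=["intended goal of automatic memory tuning across memory consumers"],
        process=["how memory is redistributed among heaps as workload changes"],
        performance=["log of recent tuning decisions and their observed effects"],
    )

    # Review the recorded requirements by category.
    for category in ("purpose", "process", "performance"):
        print(category, "->", getattr(stmm, category))

In this sketch, each category maps directly onto content that documentation or logging could surface to operators, which is the kind of design impact the case study reports.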