Toward establishing trust in adaptive agents

  • Authors:
  • Alyssa Glass; Deborah L. McGuinness; Michael Wolverton

  • Affiliations:
  • Stanford University, Stanford, CA; Stanford University, Stanford, CA; SRI International, Menlo Park, CA

  • Venue:
  • Proceedings of the 13th International Conference on Intelligent User Interfaces
  • Year:
  • 2008

Abstract

As adaptive agents become more complex and take on increasing autonomy in their users' lives, it becomes more important for users to trust and understand these agents. Little work has been done, however, to study what factors influence the level of trust users are willing to place in these agents. Without trust in the actions and results produced by these agents, their use and adoption as trusted assistants and partners will be severely limited. We present the results of a study among test users of CALO, one such complex adaptive agent system, to investigate themes surrounding trust and understandability. We identify and discuss eight major themes that significantly impact user trust in complex systems. We further provide guidelines for the design of trustable adaptive agents. Based on our analysis of these results, we conclude that the availability of explanation capabilities in these agents can address the majority of trust concerns identified by users.