Protecting artificial team-mates: more seems like less

  • Authors:
  • Tim Merritt; Kevin McGee

  • Affiliations:
  • National University of Singapore, Singapore, Singapore (both authors)

  • Venue:
  • Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
  • Year:
  • 2012


Abstract

Previous research on conversational, competitive, and cooperative systems suggests that people respond differently to humans and AI agents in their perception and evaluation of observed team-mate behavior. However, no research has examined the relationship between participants' protective behavior toward human and AI team-mates and their beliefs about that behavior. A study was conducted in which 32 participants played two sessions of a cooperative game, once with a "presumed" human team-mate and once with an AI team-mate; players could "draw fire" from a common enemy by "yelling" at it. Overwhelmingly, players claimed they "drew fire" on behalf of the presumed human more than for the AI team-mate; the logged data indicated the opposite. The main contribution of this paper is to provide evidence of this mismatch between players' beliefs about their actions and their actual behavior toward human and AI team-mates, and to offer possible explanations for the differences.