Did you notice? artificial team-mates take risks for players

  • Authors:
  • Tim Merritt;Christopher Ong;Teong Leong Chuah;Kevin McGee

  • Affiliations:
  • NUS Graduate School for Integrative Sciences and Engineering;National University of Singapore;National University of Singapore;NUS Graduate School for Integrative Sciences and Engineering

  • Venue:
  • IVA '11: Proceedings of the 10th International Conference on Intelligent Virtual Agents
  • Year:
  • 2011


Abstract

Artificial agents are increasingly included in digital games, often taking on the role of team-mate to human players. An interesting area of focus is the difference in player responses to team-mates controlled by another human versus by a computer. Although there has been research examining the social dynamics of team-mates, and even some recent research comparing responses to computer and human team-mates in terms of blame, credit, enjoyment, and physiological arousal, there does not appear to have been any research looking specifically at differences in responses to acts of risk-taking by a team-mate. To study this question, a quantitative study was conducted in which 40 participants played a real-time, goal-oriented, cooperative game. The game allows (but does not require) players to perform risky actions that benefit their team-mates: specifically, players can "draw gunfire" towards themselves (and away from their team-mates). During the study, all participants played the game twice: once with an AI team-mate and once with a "presumed" human team-mate (i.e., an AI team-mate that they believed was human). Thus, the team-mate's performance and behaviors were identical in both cases, and in both cases the team-mate "drew gunfire" an equal amount of the time. The main finding reported here is that players are more likely to notice acts of risk-taking by a human team-mate than by an artificial team-mate.