Artificial agents are increasingly included in digital games, often taking on the role of team-mate to human players. One interesting area of focus is how player responses differ between team-mates controlled by another human and those controlled by a computer. Although prior research has examined the social dynamics of team-mates, and recent work has compared responses to computer and human team-mates in terms of blame, credit, enjoyment, and physiological measures of arousal, no research appears to have looked specifically at differences in responses to a team-mate's acts of risk-taking. To study this question, a quantitative study was conducted in which 40 participants played a real-time, goal-oriented, cooperative game. The game allows (but does not require) players to perform risky actions that benefit their team-mates: specifically, players can "draw gunfire" towards themselves and away from their team-mates. Each participant played the game twice: once with an AI team-mate and once with a "presumed" human team-mate (i.e., an AI team-mate that they believed was human). The team-mate's performance and behaviour were therefore identical in both conditions, and in both conditions the team-mate "drew gunfire" an equal amount of the time. The main finding reported here is that players are more likely to notice acts of risk-taking by a human team-mate than by an artificial team-mate.