Do people hold a humanoid robot morally accountable for the harm it causes?

  • Authors and affiliations:
  • Peter H. Kahn, Jr. (University of Washington, Seattle, WA, USA)
  • Takayuki Kanda (ATR, Kyoto, Japan)
  • Hiroshi Ishiguro (Osaka University, Osaka, & ATR, Kyoto, Japan)
  • Brian T. Gill (Seattle Pacific University, Seattle, WA, USA)
  • Jolina H. Ruckert (University of Washington, Seattle, WA, USA)
  • Solace Shen (University of Washington, Seattle, WA, USA)
  • Heather E. Gary (University of Washington, Seattle, WA, USA)
  • Aimee L. Reichert (University of Washington, Seattle, WA, USA)
  • Nathan G. Freier (Microsoft, Redmond, WA, USA)
  • Rachel L. Severson (Western Washington University, Bellingham, WA, USA)

  • Venue:
  • HRI '12: Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction
  • Year:
  • 2012

Abstract

Robots will increasingly take on roles in our social lives where they can cause humans harm. When robots do so, will people hold robots morally accountable? To investigate this question, 40 undergraduate students individually engaged in a 15-minute interaction with ATR's humanoid robot, Robovie. The interaction culminated in a situation where Robovie incorrectly assessed the participant's performance in a game, and prevented the participant from winning a $20 prize. Each participant was then interviewed in a 50-minute session. Results showed that all of the participants engaged socially with Robovie, and many of them conceptualized Robovie as having mental/emotional and social attributes. Sixty-five percent of the participants attributed some level of moral accountability to Robovie. Statistically, participants held Robovie less accountable than they would a human, but more accountable than they would a vending machine. Results are discussed in terms of the New Ontological Category Hypothesis and robotic warfare.