The rediscovery of the mind
“It's the computer's fault”: reasoning about computers as moral agents
CHI '95 Conference Companion on Human Factors in Computing Systems
Design patterns for sociality in human-robot interaction
Proceedings of the 3rd ACM/IEEE international conference on Human robot interaction
Interactive robots as social partners and peer tutors for children: a field trial
Human-Computer Interaction
No fair!!: an interaction with a cheating robot
Proceedings of the 5th ACM/IEEE international conference on Human-robot interaction
The new ontological category hypothesis in human-robot interaction
Proceedings of the 6th international conference on Human-robot interaction
The curious case of human-robot morality
Proceedings of the 6th international conference on Human-robot interaction
Do elderly people prefer a conversational humanoid as a shopping assistant partner in supermarkets?
Proceedings of the 6th international conference on Human-robot interaction
ICSR '12: Proceedings of the 4th international conference on Social Robotics
Eyewitnesses are misled by human but not robot interviewers
Proceedings of the 8th ACM/IEEE international conference on Human-robot interaction
Designing for sociality in HRI by means of multiple personas in robots
Proceedings of the 8th ACM/IEEE international conference on Human-robot interaction
No joking aside: using humor to establish sociality in HRI
Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction
Will humans mutually deliberate with social robots?
Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction
Robots will increasingly take on roles in our social lives where they can cause humans harm. When robots do so, will people hold robots morally accountable? To investigate this question, 40 undergraduate students individually engaged in a 15-minute interaction with ATR's humanoid robot, Robovie. The interaction culminated in a situation where Robovie incorrectly assessed the participant's performance in a game, preventing the participant from winning a $20 prize. Each participant was then interviewed in a 50-minute session. Results showed that all of the participants engaged socially with Robovie, and many of them conceptualized Robovie as having mental/emotional and social attributes. Sixty-five percent of the participants attributed some level of moral accountability to Robovie. Statistically, participants held Robovie less accountable than they would a human, but more accountable than they would a vending machine. Results are discussed in terms of the New Ontological Category Hypothesis and in terms of robotic warfare.