Negotiating autonomy and responsibility in military robots
Ethics and Information Technology
This article discusses mechanisms and principles for assigning moral responsibility to intelligent robots, with special focus on military robots. We introduce the concept of autonomous power and use it to identify the types of robots that call for moral consideration. We further argue that autonomous power, and in particular the ability to learn, is decisive for the assignment of moral responsibility to robots. As technological development leads to robots with increasing autonomous power, we should be prepared for a future in which people blame robots for their actions, and it is important to investigate, already today, the mechanisms that govern human behavior in this respect. The results may be used in the design of future military robots to counter unwanted tendencies to assign responsibility to the robots. Independent of the responsibility issue, the moral quality of a robot's behavior should be seen as one of the many performance measures by which we evaluate robots, and the design of ethics-based control systems should be carefully investigated now. From a consequentialist view, it would be highly immoral to develop robots capable of performing acts involving life and death without including some kind of moral framework.
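To make the notion of an "ethics-based control system" concrete, the following is a minimal sketch of a governor layer that filters a robot's proposed actions through explicit moral constraints before execution. All names here (`Action`, `EthicalGovernor`, the example constraints and thresholds) are illustrative assumptions, not an implementation described in the article.

```python
# Hypothetical sketch: an ethics-based governor that vetoes proposed actions
# violating explicit moral constraints. Names and thresholds are illustrative.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    name: str
    expected_harm: float        # estimated harm to humans; 0.0 = none
    target_is_combatant: bool   # illustrative discrimination flag

# A constraint returns True if the action is morally permissible.
Constraint = Callable[[Action], bool]

class EthicalGovernor:
    def __init__(self, constraints: List[Constraint]):
        self.constraints = constraints

    def permit(self, action: Action) -> bool:
        # Veto the action if any single moral constraint is violated.
        return all(rule(action) for rule in self.constraints)

# Example constraints: discrimination and a proportionality threshold.
constraints = [
    lambda a: a.target_is_combatant,   # never target non-combatants
    lambda a: a.expected_harm <= 0.2,  # illustrative harm ceiling
]

governor = EthicalGovernor(constraints)
print(governor.permit(Action("engage", 0.1, True)))   # prints True
print(governor.permit(Action("engage", 0.1, False)))  # prints False
```

Such a veto architecture deliberately keeps the moral constraints separate from the robot's task planner, so the constraints remain inspectable even when the robot's behavior is partly learned.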