Evaluation of safety-critical software. Communications of the ACM.
A Study of Synthetic Creativity: Behavior Modeling and Simulation of an Ant Colony. IEEE Intelligent Systems.
Emergent Algorithms: A New Method for Enhancing Survivability in Unbounded Systems. Proceedings of the Thirty-Second Annual Hawaii International Conference on System Sciences (HICSS '99), Volume 7.
On the Morality of Artificial Agents. Minds and Machines.
Explanation Exploration: Exploring Emergent Behavior. Proceedings of the 21st International Workshop on Principles of Advanced and Distributed Simulation.
Floridi's Philosophy of Information and Information Ethics: Current Perspectives, Future Directions. The Information Society: The Philosophy of Information, its Nature, and Future Developments.
Expanding ethical vistas of IT professionals. Information Systems Frontiers.
Developing artificial agents worthy of trust: "Would you buy a used car from this artificial agent?" Ethics and Information Technology.
Moral responsibility for computing artifacts: "the rules" and issues of trust. ACM SIGCAS Computers and Society.
Cracking down on autonomy: three challenges to design in IT Law. Ethics and Information Technology.
Negotiating autonomy and responsibility in military robots. Ethics and Information Technology.
In their important paper "Autonomous Agents", Floridi and Sanders use "levels of abstraction" to argue that computers are, or may soon be, moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. Floridi and Sanders contributed definitions of autonomy, moral accountability, and responsibility, but they did not explore deeply some essential questions that must be answered by the computer scientists who design artificial agents. One such question is: "Can an artificial agent that changes its own programming become so autonomous that the original designer is no longer responsible for its behavior?" To explore this question, we distinguish between LoA1 (the user view) and LoA2 (the designer view) by examining the concepts of unmodifiable, modifiable, and fully modifiable tables that control artificial agents. We demonstrate that, when viewed at LoA2, an unmodifiable table distinguishes an artificial agent from a human one. This distinction supports our first counter-claim to Floridi and Sanders: such an agent is not a moral agent, and the designer bears full responsibility for its behavior. We also demonstrate that even an artificial agent with a fully modifiable table, capable of learning* and intentionality* and thus meeting the conditions Floridi and Sanders set for ascribing moral agency to an artificial agent, leaves the designer with strong moral responsibility.
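The distinction the abstract draws between unmodifiable and fully modifiable control tables can be made concrete with a minimal sketch. The code below is illustrative only and does not come from the paper; the class and method names (`TableAgent`, `act`, `update`) are assumptions introduced for this example.

```python
# Illustrative sketch: an agent whose behavior is a lookup in a
# state -> action table. Whether the table can be rewritten at run
# time is invisible at LoA1 (the user sees only inputs and outputs)
# but decisive at LoA2 (the designer view).

class TableAgent:
    """Agent controlled by a state -> action table."""

    def __init__(self, table, modifiable=False):
        self.table = dict(table)
        self.modifiable = modifiable

    def act(self, state):
        # LoA1: only this input/output behavior is observable.
        return self.table.get(state, "no-op")

    def update(self, state, action):
        # LoA2: an unmodifiable table keeps the designer in full
        # control; a modifiable one lets behavior diverge from the
        # original design.
        if not self.modifiable:
            raise PermissionError("table is fixed by the designer")
        self.table[state] = action


fixed = TableAgent({"obstacle": "stop"})           # unmodifiable table
learner = TableAgent({"obstacle": "stop"}, True)   # fully modifiable table

learner.update("obstacle", "swerve")  # the learner rewrites its own rule
# fixed.update("obstacle", "swerve") would raise PermissionError.
```

On this sketch, `fixed` and `learner` are indistinguishable at LoA1 until the table diverges; only the designer-level property `modifiable` explains why one agent's behavior can escape its original specification while the other's cannot.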