Ethics outside the box: empirical tools for an ethics of artificial agents

  • Authors:
  • Peter Danielson

  • Affiliations:
  • University of British Columbia

  • Venue:
  • Proceedings of the 16th International ACM SIGSOFT Symposium on Component-Based Software Engineering
  • Year:
  • 2013

Abstract

Software introduces new kinds of agents: artificial software agents (ASA), including, for example, driverless trains and cars. To create these devices responsibly, engineers need an ethics of software agency. However, this pragmatic professional need for guidance and regulation conflicts with the weakness of moral science. We do not know much about how ethics informs interactions with artificial agents. Most importantly, we don't know how people will regard ASA as agents: their agents (strictly speaking) and also their competitive and cooperative partners. Naturally, we want to deal with these new problems with our old ethical tools, but this conservative strategy may not work, and if not, may lead to catastrophic failure to anticipate the emerging moral landscape. (Just ask the creators of genetically modified foods.)

1. This lecture will look at the box or frame of traditional ethics and some ways to use experimental data to get outside it. The lecture uses some quick and nasty clicker experiments to point us to disturbing evidence from recent cognitive moral psychology about the form and content of our ethical apparatus (Haidt 2012) and its universality (Mikhail 2007). Then we turn to some new evidence on the ethics of human-ASA interaction. We focus on three surprising features of human-ASA interaction that disturb received ethical paradigms: 1) overactive deontology: the tendency to seek out a culprit to blame, even if it's the victim; 2) utopian consequentialism: denying the constraints of acting in the imperfect real world by shifting to wishful perfectionism; 3) embracing mechanical exploitation: accepting worse behavior from a program than one would accept from a person in Ultimatum Game experiments.

2. Next, we show how an experimental, cognitive, and game-theoretic approach to ethics can situate and explain these problems. We play some games based on policy decisions for the emerging technology of driverless cars that remind us of the strategic dimension of ethics. We also examine weak experimental evidence that engineers think about ethics and technology differently from other moral tribes or types.

3. However, we argue that theory cannot solve our ethical problems. Neither ethical theory nor game theory has resources powerful enough to discover, and hopefully to bridge, our moralized divisions. For these formidable tasks, scientific and political respectively, we need new empirical methods. We offer two examples from our current research program: 1) anonymous input of moral and value data: clickers for face-to-face interaction; 2) democratic-scale deliberation: the N-Reasons web-based experimental prototype. Both of these methods challenge our research ethics, which experimental ethics shares with experimental software engineering.

As some of the data discussed in the lecture comes from the Robot Ethics survey, you will be better informed and represented if you visit http://your-views.org/D7/Robot_Ethics_Welcome. The "class" for the conference is "CompArch".
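
As a minimal sketch of the Ultimatum Game mechanic behind finding 3 above: a proposer offers a split of a fixed pot, and a responder either accepts the split or rejects it, in which case both players get nothing. The threshold values and names below are hypothetical illustrations of the reported asymmetry, not data from the lecture or the Robot Ethics survey.

```python
# Minimal Ultimatum Game sketch. All numbers here are hypothetical
# illustration values, not findings from the Robot Ethics survey.

from dataclasses import dataclass


@dataclass
class Responder:
    """Accepts an offer only if it meets the responder's minimum share."""
    min_acceptable: float  # minimum fraction of the pot, e.g. 0.3 = 30%

    def accepts(self, offer: float, pot: float) -> bool:
        return offer / pot >= self.min_acceptable


def play_round(offer: float, pot: float, responder: Responder) -> tuple[float, float]:
    """One round: proposer offers `offer` out of `pot`.
    Returns (proposer payoff, responder payoff); rejection yields (0, 0)."""
    if responder.accepts(offer, pot):
        return pot - offer, offer
    return 0.0, 0.0


if __name__ == "__main__":
    pot = 10.0
    # Hypothetical thresholds: the same low offer is rejected when it
    # comes from a person but tolerated when it comes from a program.
    facing_human = Responder(min_acceptable=0.3)
    facing_program = Responder(min_acceptable=0.1)

    low_offer = 2.0  # a 20% share
    print("offer from human:  ", play_round(low_offer, pot, facing_human))    # (0.0, 0.0)
    print("offer from program:", play_round(low_offer, pot, facing_program))  # (8.0, 2.0)
```

Because rejection destroys both payoffs, tolerating a lower minimum share from a program than from a person is precisely the "embracing mechanical exploitation" the lecture reports.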