Trust and multi-agent systems: applying the "diffuse, default model" of trust to experiments involving artificial agents

  • Authors:
  • Jeff Buechner; Herman T. Tavani

  • Affiliations:
  • Department of Philosophy, Rutgers University, Newark, USA and Saul Kripke Center, City University of New York-The Graduate Center, New York, USA; Department of Philosophy, Rivier College, Nashua, NH 03060, USA

  • Venue:
  • Ethics and Information Technology
  • Year:
  • 2011


Abstract

We argue that the notion of trust, as it figures in an ethical context, can be illuminated by examining research in artificial intelligence on multi-agent systems in which commitment and trust are modeled. We begin with an analysis of a philosophical model of trust based on Richard Holton's interpretation of P. F. Strawson's writings on freedom and resentment, and we show why this account of trust is difficult to extend to artificial agents (AAs) as well as to other non-human entities. We then examine Margaret Urban Walker's notions of "default trust" and "default, diffuse trust" to see how these concepts can inform our analysis of trust in the context of AAs. In the final section, we show how ethicists can improve their understanding of important features in the trust relationship by examining data resulting from a classic experiment involving AAs.