This paper considers a human-agent system in which an operator performs a pattern recognition task with the support of an automated decision aid. The objective is to make this human-agent system operate as effectively as possible; effectiveness improves with appropriate reliance on the operator and on the aid. We study whether this objective can be furthered by letting the aid, in addition to the operator, calibrate trust in order to make reliance decisions. Beyond that, the aid can also calibrate trust in the reliance decision making capabilities of both the operator and itself, yielding reliance decision making at a metalevel, which we call metareliance decision making. We present formalizations of these two approaches: a reliance decision making model (RDMM) and a metareliance decision making model (MetaRDMM), respectively. A combination of laboratory and simulation experiments shows significant improvements over reliance decision making performed solely by operators.
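The two levels of decision making described above can be illustrated with a minimal sketch. This is not the paper's formalization: the exponential trust-update rule, the learning rate, and the greedy "rely on the more trusted party" comparison are all simplifying assumptions made for illustration.

```python
# Illustrative sketch only (assumed update rule and decision rules,
# not the RDMM/MetaRDMM formalization from the paper).

ALPHA = 0.2  # assumed learning rate for trust calibration


def update_trust(trust, was_correct):
    """Move trust toward 1 after a correct outcome, toward 0 after an error."""
    target = 1.0 if was_correct else 0.0
    return trust + ALPHA * (target - trust)


def reliance_decision(trust_in_operator, trust_in_aid):
    """Reliance decision: rely on whichever party is trusted more
    for the pattern recognition task itself."""
    return "operator" if trust_in_operator >= trust_in_aid else "aid"


def metareliance_decision(trust_op_rdm, trust_aid_rdm,
                          operator_choice, aid_choice):
    """Metareliance decision: follow the reliance decision of whichever
    party's *reliance decision making capability* is trusted more."""
    return operator_choice if trust_op_rdm >= trust_aid_rdm else aid_choice
```

For example, if the aid trusts its own reliance decision making (0.8) more than the operator's (0.4), the metareliance decision follows the aid's reliance choice even when the operator would have chosen differently.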