Trust between humans and machines, and the design of decision aids
International Journal of Man-Machine Studies - Special Issue: Cognitive Engineering in Dynamic Worlds
Trust, self-confidence, and operators' adaptation to automation
International Journal of Human-Computer Studies
The role of trust in automation reliance
International Journal of Human-Computer Studies - Special issue: Trust and technology
An Adaptive Recommendation Trust Model in Multiagent System
IAT '04 Proceedings of the IEEE/WIC/ACM International Conference on Intelligent Agent Technology
Trust Model for Open Ubiquitous Agent Systems
IAT '05 Proceedings of the IEEE/WIC/ACM International Conference on Intelligent Agent Technology
Trust in new decision aid systems
IHM '06 Proceedings of the 18th International Conference of the Association Francophone d'Interaction Homme-Machine
Proceedings of the 3rd ACM/IEEE international conference on Human robot interaction
Potential measures for detecting trust changes
HRI '12 Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction
Robot confidence and trust alignment
Proceedings of the 8th ACM/IEEE international conference on Human-robot interaction
Impact of robot failures and feedback on real-time trust
Proceedings of the 8th ACM/IEEE international conference on Human-robot interaction
Familiarization to robot motion
Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction
Prior work in human-autonomy interaction has focused on plant systems operating in highly structured environments. In contrast, many human-robot interaction (HRI) tasks are dynamic and unstructured, taking place in the open world. We believe that methods developed for measuring and modeling trust in traditional automation must be adapted to be useful for HRI. It is therefore important to characterize the factors in HRI that influence trust. This study focused on the influence of changing autonomy reliability. Participants experienced a set of challenging robot-handling scenarios that forced autonomy use and kept them focused on autonomy performance. The counterbalanced experiment included scenarios with different low-reliability windows so that we could examine how drops in reliability altered trust and use of autonomy. Drops in reliability were shown to affect trust, the frequency and timing of autonomy mode switching, and participants' self-assessments of performance. A regression analysis over a number of robot, personal, and scenario factors revealed that participants tie trust more strongly to their own actions than to robot performance.
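The regression analysis described above can be illustrated with a minimal sketch. The factor names and all data below are hypothetical assumptions for illustration only, not the study's actual variables or results; the sketch simply shows an ordinary-least-squares fit of trust ratings against robot, personal, and scenario factors.

```python
import numpy as np

# Synthetic illustration (NOT the study's data): regress trust ratings
# on one robot factor, one personal factor, and one scenario factor.
rng = np.random.default_rng(0)
n = 40
robot_reliability = rng.uniform(0.5, 1.0, n)    # robot factor (assumed)
self_assessed_perf = rng.uniform(0.0, 1.0, n)   # personal factor (assumed)
scenario_difficulty = rng.uniform(0.0, 1.0, n)  # scenario factor (assumed)

# Synthetic trust scores, constructed so that the operator's own
# performance weighs more heavily than robot reliability, echoing
# the abstract's qualitative finding.
trust = (0.2 * robot_reliability + 0.7 * self_assessed_perf
         - 0.1 * scenario_difficulty + rng.normal(0, 0.05, n))

# Ordinary least squares: design matrix with an intercept column,
# solved via numpy's least-squares routine.
X = np.column_stack([np.ones(n), robot_reliability,
                     self_assessed_perf, scenario_difficulty])
coef, *_ = np.linalg.lstsq(X, trust, rcond=None)
print(dict(zip(["intercept", "reliability", "self_perf", "difficulty"],
               np.round(coef, 2))))
```

A larger coefficient on the personal factor than on the robot factor in such a fit is the pattern the abstract reports: trust tracks the participant's own actions more strongly than robot performance.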