One of the main challenges to the safe use of new technologies in complex systems is the level of trust operators place in the system. Danger arises when operators trust the system too little, but also when they overtrust it. This paper presents an extensive review of theoretical, empirical, and experimental studies of trust in systems. Its goal is to help system designers by proposing a set of design rules and guidelines for supporting appropriate trust calibration in new decision aid systems.