Modelling biased human trust dynamics
Web Intelligence and Agent Systems
For personal assistant agents in an ambient intelligence context to provide good recommendations, or to pro-actively support humans in task allocation, a good model of the human's preferences is essential. One aspect that can be used to tailor this support to a human's preferences is trust. Because a personal assistant agent typically chooses which of several substitutable options to advise, its measurement of trust should be relative: trust in an option is judged against trust in its alternatives. This paper presents such a model of relative trust, in which a number of parameters can be set to represent characteristics of the human.
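The paper's actual equations are not reproduced in this abstract, but the idea of an experience-driven, relative trust model with a human-characteristic parameter can be illustrated with a minimal sketch. Here the function name, the blend-style update, and the flexibility parameter gamma are all illustrative assumptions, not the paper's formulation:

```python
def update_relative_trust(trust, experiences, gamma=0.8):
    """Illustrative sketch (not the paper's model) of a relative trust update.

    trust:       dict mapping each substitutable option to its current
                 trust value in [0, 1]
    experiences: dict mapping each option to the outcome of the latest
                 interaction, in [0, 1]
    gamma:       assumed "flexibility" parameter standing in for a human
                 characteristic: higher values make trust react more
                 strongly to new experiences
    """
    # Absolute update: blend the old trust value with the new experience.
    updated = {o: (1 - gamma) * t + gamma * experiences[o]
               for o, t in trust.items()}
    # Relative trust: normalise across the options, so each trust value
    # expresses how an option compares with its substitutable alternatives.
    total = sum(updated.values())
    return {o: v / total for o, v in updated.items()}
```

For example, starting from equal trust in two options and observing a positive experience for one and a negative one for the other shifts the relative trust toward the first option while the values still sum to one.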