Bootstrapping trust evaluations through stereotypes
Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2010) - Volume 1
Large-scale multiagent systems have the potential to be highly dynamic. Trust and reputation are crucial concepts in such environments: agents may need to rely on their peers to perform as expected, and to learn to avoid untrustworthy partners. However, the dynamics of these systems make forming trust relationships difficult. For example, interactions may be short-lived, preventing agents from gaining the experience needed to make accurate trust evaluations. This article describes a new approach, inspired by theories of human organizational behavior, in which agents generalize their experiences with previously encountered partners into stereotypes, based on the observable features of those partners and their behaviors. These stereotypes are then applied when evaluating new and unknown partners. Moreover, stereotypical opinions can be communicated within the society, giving rise to the notion of stereotypical reputation. We show how this approach can complement existing state-of-the-art trust models, enhancing the confidence of the evaluations that can be made about trustees when direct and reputational information is lacking or limited. We also show how a stereotyping approach can help agents detect unwanted biases in the reputational opinions they receive from others in the society.
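The core idea in the abstract — learn a stereotypical base rate from the observable features of past partners, then use it as a prior when judging newcomers — can be sketched as follows. This is a toy illustration, not the authors' algorithm: the paper's own stereotype learner and trust model are more sophisticated, and the class names, the per-profile averaging, and the fixed pseudo-count `weight` are all assumptions made here for brevity.

```python
from collections import defaultdict


class StereotypeModel:
    """Toy stereotype learner.

    Records interaction outcomes keyed by a partner's observable feature
    profile, and returns the mean outcome for that profile as a
    stereotypical base rate for previously unseen partners.
    """

    def __init__(self, default_prior=0.5):
        self.default_prior = default_prior  # base rate when no stereotype exists
        self.outcomes = defaultdict(list)   # feature profile -> outcome history

    def observe(self, features, success):
        # Record one interaction outcome (1.0 = success, 0.0 = failure)
        # under the partner's observable feature profile.
        self.outcomes[frozenset(features)].append(1.0 if success else 0.0)

    def prior(self, features):
        # Stereotypical expectation for a newcomer with these features.
        key = frozenset(features)
        if key not in self.outcomes:
            return self.default_prior
        history = self.outcomes[key]
        return sum(history) / len(history)


def trust_estimate(model, features, direct_pos, direct_neg):
    """Beta-style expected trust value seeded with the stereotypical prior.

    With no direct evidence this reduces to the stereotype; as direct
    positive/negative observations accumulate, they dominate the estimate.
    """
    base = model.prior(features)
    weight = 2.0  # pseudo-count weight given to the stereotype (an assumption)
    return (direct_pos + weight * base) / (direct_pos + direct_neg + weight)
```

For example, after three successful and one failed interaction with partners sharing a feature profile, a newcomer with that same profile starts from a 0.75 prior rather than an uninformative 0.5, and direct evidence then refines the estimate.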