Logical and Relational Learning: From ILP to MRDM (Cognitive Technologies)
A Formalization of Trust Alignment
Artificial Intelligence Research and Development: Proceedings of the 12th International Conference of the Catalan Association for Artificial Intelligence (2009)
Handling subjective user feedback for reputation computation in virtual reality
UMAP'11: Proceedings of the 19th International Conference on Advances in User Modeling
Engineering trust alignment: Theory, method and experimentation
International Journal of Human-Computer Studies
Talking about trust in heterogeneous multi-agent systems
IJCAI'11: Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence - Volume Three
Personalizing communication about trust
Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems - Volume 1
In open multi-agent systems, trust models are an important tool for agents to achieve effective interactions. However, agents do not necessarily use similar trust models, which leads to semantic differences between the trust evaluations of different agents. We show how to form a trust alignment by considering the interactions the agents share, and we describe a method, based on inductive learning algorithms, for accomplishing this alignment.
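The abstract only sketches the approach, so the following Python snippet illustrates one way an alignment of this kind could be learned. It is a minimal sketch under assumptions not taken from the paper: trust evaluations are modeled as scalars in [0, 1], shared interactions are reduced to flat numeric feature vectors, and a decision-tree regressor stands in for the inductive learner. The names `SharedInteraction`, `learn_alignment`, and `translate` are hypothetical and are not from the source.

```python
# Minimal sketch of trust alignment via inductive learning.
# Assumptions (not from the paper): trust evaluations are scalars in [0, 1],
# and each shared interaction is described by a flat numeric feature vector.

from dataclasses import dataclass
from typing import List

from sklearn.tree import DecisionTreeRegressor  # stand-in inductive learner


@dataclass
class SharedInteraction:
    """One interaction observed by both agents (hypothetical encoding)."""
    features: List[float]   # descriptors of the interaction itself
    their_eval: float       # trust value the other agent communicated
    my_eval: float          # trust value my own trust model produced


def learn_alignment(interactions: List[SharedInteraction]) -> DecisionTreeRegressor:
    """Induce a mapping from (interaction features, their evaluation) to my
    own evaluation, using only interactions both agents experienced."""
    X = [i.features + [i.their_eval] for i in interactions]
    y = [i.my_eval for i in interactions]
    model = DecisionTreeRegressor(max_depth=3)
    return model.fit(X, y)


def translate(model: DecisionTreeRegressor,
              features: List[float], their_eval: float) -> float:
    """Interpret a newly communicated evaluation in my own terms."""
    return float(model.predict([features + [their_eval]])[0])
```

The actual work may induce relational hypotheses over much richer descriptions of the shared interactions; a regressor over numeric features is simply the most compact inductive stand-in for the idea of learning a translation between two agents' trust vocabularies from common evidence.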