Modeling Dialogues Using Argumentation
ICMAS '00 Proceedings of the Fourth International Conference on MultiAgent Systems (ICMAS-2000)
Two party immediate response disputes: properties and efficiency
Artificial Intelligence
Inconsistency tolerance in weighted argument systems
Proceedings of The 8th International Conference on Autonomous Agents and Multiagent Systems - Volume 2
Weighted argument systems: Basic definitions, algorithms, and complexity results
Artificial Intelligence
Trust alignment: a sine qua non of open multi-agent systems
OTM'11 Proceedings of the 2011 Confederated International Conference on On the Move to Meaningful Internet Systems - Volume Part I
Engineering trust alignment: Theory, method and experimentation
International Journal of Human-Computer Studies
Talking about trust in heterogeneous multi-agent systems
IJCAI'11 Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence - Volume Three
Personalizing communication about trust
Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems - Volume 1
Arguing about social evaluations: From theory to experimentation
International Journal of Approximate Reasoning
In open multiagent systems, agents depend on reputation and trust mechanisms to evaluate the behavior of potential partners. Often these evaluations (social evaluations) are associated with a measure of reliability computed by the source agent. When social evaluations are communicated, this can lead to serious problems because reputation-related information is subjective. In this paper, instead of relying only on reliability measures computed by the sources, we provide a mechanism that allows the recipient to decide, according to its own knowledge, whether a piece of information is reliable. We do this by allowing the agents to engage in an argumentation-based dialogue.