An Argumentation-Based Dialog for Social Evaluations Exchange
Proceedings of the 2010 conference on ECAI 2010: 19th European Conference on Artificial Intelligence
In open multiagent systems, agents depend on reputation and trust mechanisms to evaluate the behavior of potential partners. These evaluations are often accompanied by a reliability measure computed by the source agent. However, because reputation-related information is subjective, relying on such measures can lead to serious problems when social evaluations are communicated between agents. In this paper, instead of relying only on reliability measures computed by the sources, we provide a mechanism that allows the recipient to decide whether a piece of information is reliable according to its own knowledge. We do this by letting agents engage in an argumentation-based dialog specifically designed for the exchange of social evaluations. We evaluate our framework through simulations. The results show that, under most of the tested conditions, agents that use our dialog framework achieve a statistically significant improvement in the accuracy of their evaluations over agents that do not. In particular, the simulations reveal that when the agent population is heterogeneous (not all agents share the same goals) and agents base part of their inferences on third-party information, it is worth using our dialog protocol.
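The core idea — a recipient weighing a communicated social evaluation against its own knowledge rather than trusting the sender's self-reported reliability — can be illustrated with a minimal sketch. This is not the authors' protocol; the class names (`SocialEvaluation`, `Agent`), the `assert`/`challenge` moves, and the support-counting acceptance rule are all illustrative assumptions standing in for a full argumentation-based dialog.

```python
from dataclasses import dataclass

@dataclass
class SocialEvaluation:
    target: str    # the agent being evaluated
    value: float   # reputation value in [0, 1]
    support: int   # direct experiences backing the claim (illustrative proxy
                   # for the strength of the supporting argument)

class Agent:
    def __init__(self, name, own_evals=None):
        self.name = name
        self.evals = dict(own_evals or {})  # target name -> SocialEvaluation

    def assert_eval(self, target):
        """Opening move: assert a social evaluation about `target`."""
        return self.evals.get(target)

    def challenge(self, claim):
        """Recipient move: accept the communicated claim only if it is
        backed by more evidence than the recipient's own, or if the
        recipient has no evaluation of its own for that target."""
        own = self.evals.get(claim.target)
        if own is None or claim.support > own.support:
            self.evals[claim.target] = claim
            return "accept"
        return "reject"

# A has strong direct evidence about C; B's weaker local view is overridden.
a = Agent("A", {"C": SocialEvaluation("C", 0.9, support=10)})
b = Agent("B", {"C": SocialEvaluation("C", 0.2, support=2)})
claim = a.assert_eval("C")
print(b.challenge(claim))   # accept
print(b.evals["C"].value)   # 0.9
```

The design point mirrored here is that acceptance is decided by the recipient's own knowledge (its `evals`), not by a reliability score shipped along with the message; a real argumentation dialog would replace the single support comparison with an iterated exchange of attacking and supporting arguments.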