Arguing about social evaluations: From theory to experimentation
International Journal of Approximate Reasoning
Since electronic and open environments became a reality, computational models of trust and reputation have attracted increasing interest in the field of multi-agent systems (MAS). In virtual societies of human actors, well-known mechanisms, such as the eBay scoring system, are already used to control non-normative agents. In virtual societies of artificial and autonomous agents the same need arises, and several computational trust and reputation models have appeared in the literature to address it. Typically, these models evaluate agents' performance in a specific context, taking into account both direct experiences and third-party information, the latter being the communication of agents' own opinions. When dealing with cognitive agents endowed with complex reasoning mechanisms, we would like these opinions to be justified, so that the resulting information is more complete and reliable. In this paper we present LRep, a language based on an existing ontology of reputation that allows the construction of justifications for communicated social evaluations.
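To make the two information sources mentioned above concrete, the following is a minimal Python sketch of a generic reputation model that mixes an agent's direct experiences with reliability-discounted third-party opinions. All names, parameters, and the weighting scheme are illustrative assumptions for this sketch; they are not part of LRep or of any specific model from the literature.

```python
from dataclasses import dataclass, field

@dataclass
class ToyReputationModel:
    """Illustrative sketch (not LRep): evaluate a target agent by
    combining direct experiences with third-party opinions, where each
    opinion is discounted by the witness's assumed reliability."""
    direct: list = field(default_factory=list)     # outcomes in [0, 1]
    witnessed: list = field(default_factory=list)  # (opinion, witness_reliability)

    def record_experience(self, outcome: float) -> None:
        """Store the result of a direct interaction with the target."""
        self.direct.append(outcome)

    def record_opinion(self, opinion: float, reliability: float) -> None:
        """Store a communicated opinion, weighted later by reliability."""
        self.witnessed.append((opinion, reliability))

    def evaluate(self, direct_weight: float = 0.7) -> float:
        """Weighted mix of the two sources; 0.5 is a neutral default
        when a source has no data."""
        d = sum(self.direct) / len(self.direct) if self.direct else 0.5
        if self.witnessed:
            w = (sum(o * r for o, r in self.witnessed)
                 / sum(r for _, r in self.witnessed))
        else:
            w = 0.5
        return direct_weight * d + (1 - direct_weight) * w

# Example: two good direct interactions, one lukewarm opinion from a
# moderately reliable witness.
model = ToyReputationModel()
model.record_experience(1.0)
model.record_experience(0.8)
model.record_opinion(0.4, reliability=0.5)
score = model.evaluate()  # 0.7 * 0.9 + 0.3 * 0.4 = 0.75
```

The point the paper builds on is that the witness opinion here arrives as a bare number: nothing in such a model lets the receiver inspect *why* the witness holds that evaluation, which is the gap a justification language like LRep is meant to fill.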