In open multiagent systems, agents need to select whom to trust. Traditionally, this selection is based on models that agents build of the other agents in the system: after each interaction, an agent updates its model of the interaction partner based on the outcome. However, building and maintaining accurate models is difficult, especially before many interactions have taken place. In contrast to such traditional modeling approaches, we propose to model the environment in terms of the agent's actions and their effects, rather than building an individual model for each other agent. Based on the observed effects of its actions, each agent can modify its behavior appropriately. We evaluate the proposed approach against a traditional approach in the Agent Reputation and Trust (ART) Testbed simulation environment. The simulations compare the two approaches in terms of the accuracy of the models, the effectiveness in finding trustworthy agents, and the effort needed to build accurate models.
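The contrast described above can be illustrated with a minimal sketch. The class below learns one value per *action* from the observed outcomes of taking that action, instead of maintaining a separate trust model per partner agent. All names, parameters, and the update rule (an exponential moving average, in the spirit of reinforcement-learning value updates) are illustrative assumptions, not the paper's actual ART Testbed implementation.

```python
import random

class ActionEffectLearner:
    """Hypothetical sketch: estimate the value of each available action
    from the effects (rewards) observed after taking it, rather than
    modeling the trustworthiness of each individual partner agent."""

    def __init__(self, actions, alpha=0.2):
        self.q = {a: 0.0 for a in actions}  # one value estimate per action
        self.alpha = alpha                  # learning rate (assumed value)

    def choose(self, epsilon=0.1):
        # Epsilon-greedy selection: mostly exploit the best-valued action,
        # occasionally explore another one.
        if random.random() < epsilon:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def update(self, action, reward):
        # Exponential moving average of the action's observed effect.
        self.q[action] += self.alpha * (reward - self.q[action])

if __name__ == "__main__":
    # Toy environment: asking a reputable source (action "a") tends to
    # pay off more than asking an arbitrary one (action "b").
    learner = ActionEffectLearner(["a", "b"])
    for _ in range(100):
        learner.update("a", 1.0)
        learner.update("b", 0.2)
    print(learner.choose(epsilon=0.0))  # prefers "a"
```

The key design point is that the state kept by the learner grows with the number of *actions*, not with the number of agents in the (open) system, which is what reduces the modeling effort before many interactions have occurred.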