An Adaptive Strategy for Trust/Honesty Model in Multi-Agent Semi-Competitive Environments
ICTAI '04 Proceedings of the 16th IEEE International Conference on Tools with Artificial Intelligence
In multiagent semi-competitive environments, competition and cooperation can coexist. When agents compete with each other, they have an incentive to lie; when they can increase their utilities by cooperating, they have an incentive to tell the truth. As a receiver, therefore, an agent needs to decide whether or not to trust the messages it receives. To help agents make this decision, some existing models rely on trust or reputation alone, so that agents choose to believe (or cooperate with) trustworthy senders or senders with high reputation. However, a trustworthy agent may bring only little benefit. Another approach is to use expected utility, but agents that believe only messages with high expected utility are easily cheated. To address these problems, this paper introduces the Trust Model, which uses trust, expected utility, and agents' attitudes towards risk to make the decision. Conversely, as a sender, an agent needs to decide whether or not to be honest; to help with this decision, this paper introduces the Honesty Model, which is symmetric to the Trust Model. In addition, we add an adaptive strategy to the Trust/Honesty Model, which enables agents to learn from and adapt to their environment. Simulations show that agents using the Adaptive Trust/Honesty Model perform much better than agents that use only trust or only expected utility to make the decision.
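The abstract does not give the model's actual equations, so the following is only an illustrative sketch of the kind of receiver-side decision the Trust Model is described as making: combining a trust estimate of the sender, the expected utility of acting on the message, and a risk-attitude parameter. All names, scales, and the scoring rule below are assumptions made for illustration, not the paper's formulation.

    # Illustrative sketch only: the abstract does not specify the Trust Model's
    # equations. This toy receiver combines a trust estimate of the sender, the
    # expected utility of acting on the message, and a risk-attitude parameter
    # (all hypothetical names and scales) into a single accept/reject decision.
    from dataclasses import dataclass

    @dataclass
    class Receiver:
        risk_attitude: float = 0.5  # assumed scale: 0 = risk-averse, 1 = risk-seeking

        def should_believe(self, trust: float, expected_gain: float,
                           worst_case_loss: float) -> bool:
            """Believe the message if its risk-weighted value is positive.

            trust           -- estimated probability in [0, 1] that the sender is honest
            expected_gain   -- utility gained if the message is truthful
            worst_case_loss -- utility lost if the message is a lie
            """
            value_if_true = trust * expected_gain
            value_if_false = (1.0 - trust) * worst_case_loss
            # A risk-averse agent (low risk_attitude) weights the potential loss more heavily.
            risk_weight = 1.0 + (1.0 - self.risk_attitude)
            return value_if_true - risk_weight * value_if_false > 0.0

    if __name__ == "__main__":
        cautious = Receiver(risk_attitude=0.2)
        bold = Receiver(risk_attitude=0.9)
        # Same message and trust estimate, but different risk attitudes can flip the decision.
        print(cautious.should_believe(trust=0.6, expected_gain=10.0, worst_case_loss=9.0))  # False
        print(bold.should_believe(trust=0.6, expected_gain=10.0, worst_case_loss=9.0))      # True

The contrasting calls show how two receivers with the same trust estimate and the same expected utilities can reach different decisions once risk attitude is taken into account, which is the intuition behind the abstract's claim that trust alone or expected utility alone is insufficient.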