Fuzzy sets, fuzzy logic, and fuzzy systems.
Social trust: a cognitive approach. In: Trust and Deception in Virtual Societies.
Notions of reputation in multi-agent systems: a review. In: Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems, Part 1.
An evidential model of distributed reputation management. In: Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems, Part 1.
The evolution and stability of cooperative traits. In: Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems, Part 3.
Intelligent Control: Aspects of Fuzzy Logic and Neural Nets.
Reputation and endorsement for web services. In: ACM SIGecom Exchanges, Chains of Commitment.
Belief, information acquisition, and trust in multi-agent systems: a modal logic formulation. In: Artificial Intelligence.
Agent-mediated electronic commerce: a survey. In: The Knowledge Engineering Review.
In: AAMAS '04, Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems, Volume 2.
Using arguments for making decisions: a possibilistic logic approach. In: UAI '04, Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence.
Explaining qualitative decision under uncertainty by argumentation. In: AAAI '06, Proceedings of the 21st National Conference on Artificial Intelligence, Volume 1.
A fuzzy approach to a belief-based trust computation. In: AAMAS '02, Proceedings of the 2002 International Conference on Trust, Reputation, and Security: Theories and Practice.
Some thoughts on using argumentation to handle trust. In: CLIMA '11, Proceedings of the 12th International Conference on Computational Logic in Multi-Agent Systems.
A socio-cognitive model of trust using argumentation theory. In: International Journal of Approximate Reasoning.
Arguing about social evaluations: from theory to experimentation. In: International Journal of Approximate Reasoning.
In an open multi-agent system, the goals of agents acting on behalf of their owners often conflict with each other. A personal agent protecting the interests of a single user therefore cannot always rely on the other agents, and needs to be able to reason about trusting (information or services provided by) them. Existing algorithms that perform such reasoning focus mainly on the immediate utility of a trusting decision, but do not provide an explanation of their actions to the user. This may hinder the acceptance of agent-based technologies in sensitive applications where users need to rely on their personal agents. Against this background, we propose a new approach to trust based on argumentation that aims to expose the rationale behind such trusting decisions. Our solution separates opponent modeling from decision making: it uses possibilistic logic to model the behavior of opponents, and it extends the argumentation framework of Amgoud and Prade [1] so that the fuzzy rules within these models can be used for well-supported decisions.
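To make the idea concrete, the following is a minimal, hypothetical sketch of how an argumentation-based trust decision in the spirit of Amgoud and Prade's possibilistic framework might look. It is not the paper's actual algorithm: all rule labels, certainty degrees, and decision names are invented, and the ranking used here is the simple "weakest link" criterion (an argument is only as strong as its least certain premise and the priority of the goal it serves), with the winning argument returned as the explanation.

```python
from dataclasses import dataclass

@dataclass
class Argument:
    decision: str       # candidate decision this argument supports, e.g. "trust_B"
    reasons: list       # (rule label, certainty in [0, 1]) pairs from the opponent model
    goal_priority: float  # priority of the goal the decision would satisfy

    def strength(self) -> float:
        # Weakest-link strength: the minimum over all premise certainties
        # and the priority of the supported goal.
        return min([c for _, c in self.reasons] + [self.goal_priority])

def best_decision(arguments):
    # Rank candidate decisions by their strongest supporting argument and
    # return the winner together with the argument that explains the choice.
    best = max(arguments, key=Argument.strength)
    return best.decision, best

# Hypothetical opponent model: fuzzy/possibilistic rules about agent B
# learned from past interactions (all numbers are illustrative).
args = [
    Argument("trust_B",    [("B_kept_commitments", 0.8), ("B_good_reputation", 0.7)], 0.9),
    Argument("distrust_B", [("B_was_late_once", 0.4)], 0.6),
]
decision, why = best_decision(args)
print(decision, why.strength())  # -> trust_B 0.7
```

The returned `Argument` is what distinguishes this from a purely utility-based scheme: it carries the rule labels that justify the decision, so the personal agent can present them to the user as an explanation.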