To estimate how much an agent can be trusted, its trustworthiness must be assessed. Usually, an agent's poor performance lowers trust in that agent, but this is not always reasonable: if the environment interferes with the performance, the agent may not be to blame for the failure. We examine which failures can be called excusable and hence should not count as poor performance. Knowledge of such failures makes assessments of trustworthiness more accurate. To approach a formal definition of excusableness, we introduce a generic formalism for describing the environments of multi-agent systems. This formalism provides a basis for the definition of environmental interference. We identify the remaining criteria for excusableness and give a formal definition of it. Our analysis reveals that environmental interference and a strong commitment of the performing agent do not suffice to make a failure excusable.
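The idea of discounting excusable failures in a trust assessment can be sketched as follows. This is a minimal illustration, not the paper's formalism: the class and function names (`Outcome`, `update_trust`) and the simple exponential-style update rule are assumptions introduced here. Note that, as the abstract stresses, environmental interference together with strong commitment is not by itself sufficient, so excusableness is modeled as a separate judgment rather than derived from those two flags.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    success: bool            # did the delegated task succeed?
    env_interfered: bool     # did the environment disturb the attempt?
    strongly_committed: bool # did the agent remain committed to the task?
    excusable: bool          # full excusableness judgment (not implied by
                             # the two flags above, per the analysis)

def update_trust(trust: float, outcome: Outcome, lr: float = 0.1) -> float:
    """Update a trust estimate in [0, 1] from one observed outcome.

    Successes raise trust, inexcusable failures lower it, and
    excusable failures are ignored: they carry no evidence about
    the agent's trustworthiness.
    """
    if outcome.success:
        return trust + lr * (1.0 - trust)
    if outcome.excusable:
        return trust  # excusable failure: leave the estimate unchanged
    return trust - lr * trust
```

For example, a failure under interference by a strongly committed agent still lowers trust here unless it is also judged excusable, mirroring the abstract's conclusion that those two conditions alone do not suffice.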