Multi-agent systems are considered a modern medium of communication and interaction that requires limited or no human intervention. As intelligent agents are gradually enriched with Semantic Web technology, their use is constantly increasing. Consequently, the degree of trust that can be invested in a given agent is recognized as a vital issue. Current trust models are mainly based on an agent's direct experience (interaction trust) or on reports provided by others (witness reputation), although, lately, combinations of the two (hybrid models) have also been proposed. To overcome the main drawbacks of these approaches, this paper proposes HARM, a hybrid, rule-based reputation model based on temporal defeasible logic. It combines the advantages of the hybrid approach with the benefits of rule-based reputation modeling, providing a stable and realistic estimation mechanism with low bandwidth and computational complexity. Moreover, an evaluation of the reputation model is presented, demonstrating the added value of the approach.
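To make the hybrid idea concrete, the sketch below shows one minimal way to blend interaction trust with witness reputation under a temporal discount, so that older ratings count for less. This is an illustrative assumption, not HARM's actual rule-based mechanism: the weighting scheme, the `decay` and `w_direct` parameters, and all function names are hypothetical.

```python
import math
from dataclasses import dataclass


@dataclass
class Rating:
    value: float  # rating in [0, 1]
    age: float    # time elapsed since the interaction (arbitrary units)


def discounted_mean(ratings, decay=0.1):
    """Time-discounted average: older ratings weigh exponentially less."""
    if not ratings:
        return None
    weights = [math.exp(-decay * r.age) for r in ratings]
    return sum(w * r.value for w, r in zip(weights, ratings)) / sum(weights)


def hybrid_reputation(direct, witness, w_direct=0.7):
    """Blend interaction trust (direct) with witness reputation (witness).

    Falls back to whichever source is available when the other is empty,
    mirroring how hybrid models avoid the cold-start problem of
    pure interaction trust.
    """
    d = discounted_mean(direct)
    w = discounted_mean(witness)
    if d is None:
        return w
    if w is None:
        return d
    return w_direct * d + (1 - w_direct) * w


# Two direct experiences (one recent, one older) and one witness report.
direct = [Rating(0.9, age=1.0), Rating(0.8, age=5.0)]
witness = [Rating(0.4, age=2.0)]
score = hybrid_reputation(direct, witness)
```

In a temporal defeasible setting, the decay would instead be expressed as rules whose conclusions expire or are defeated over time; the exponential discount here is only a stand-in for that behaviour.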