In open multi-agent systems, trust models are an important tool for agents to achieve effective interactions. However, in such open systems the agents do not necessarily use the same, or even similar, trust models, leading to semantic differences between the trust evaluations of different agents. Hence, to successfully use communicated trust evaluations, agents need to align their trust models. We argue that currently proposed solutions, such as common ontologies or ontology alignment methods, lead to additional problems, and we propose a novel approach. We show how a trust alignment can be formed by considering the interactions that agents share, and we describe a mathematical framework that formulates precisely how these interactions support the trust evaluations of both agents. We show how this framework can be used in the alignment process and explain how an alignment should be learned. Finally, we demonstrate this alignment process in practice, using a first-order regression algorithm to learn an alignment and testing it in an example scenario.
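The core idea of learning an alignment from shared interactions can be illustrated with a minimal sketch. The abstract mentions a first-order regression algorithm; as a simplification, the sketch below stands in ordinary least-squares regression for it, fitting a mapping from trust values communicated by agent B to agent A's own trust scale over interactions both agents witnessed. All names and the sample data are hypothetical, purely for illustration.

```python
# Hypothetical sketch: aligning two agents' trust evaluations via
# regression over shared interactions. The original work uses
# first-order regression over richer interaction descriptions;
# plain linear regression is a stand-in here.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

# Shared interactions, as (B's reported trust, A's own trust) pairs
# over the same interactions (invented sample data).
shared = [(0.2, 0.10), (0.5, 0.35), (0.8, 0.62), (1.0, 0.80)]
a, b = fit_linear([s[0] for s in shared], [s[1] for s in shared])

def align(b_value):
    """Translate a trust value communicated by B into A's scale."""
    return a * b_value + b
```

Once the alignment is learned from shared interactions, A can apply `align` to trust evaluations B communicates about agents A has never interacted with, interpreting them on A's own scale.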