Trust learning is a crucial aspect of information exchange, negotiation, and other social interactions among autonomous agents in open systems. Most current probabilistic models for computational trust learning, however, cannot take context into account when predicting the future behavior of interacting agents, nor can they transfer knowledge gained in one context to a related context. Humans, by contrast, are especially skilled at perceiving traits such as trustworthiness in these so-called initial trust situations. The same restriction applies to most multiagent learning problems: in complex scenarios, most algorithms do not scale well to large state spaces and require numerous interactions to learn. We argue that trust-related scenarios are best represented as a system of relations that captures semantic knowledge. Following recent work on nonparametric Bayesian models, we propose a flexible and context-sensitive way to model and learn multidimensional trust values, one that is particularly well suited to establishing trust among strangers without a prior relationship. To evaluate our approach, we extend a multiagent framework by allowing agents to retrospectively break an agreed interaction outcome. The results suggest that the inherent ability to discover the clusters, and relationships between clusters, that are best supported by the data makes it possible to predict the future behavior of agents, especially when initial trust is involved.
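The nonparametric clustering idea mentioned above can be illustrated with a Chinese Restaurant Process (CRP) prior, the mechanism commonly used to let the number of behavioral clusters grow with the data rather than being fixed in advance. The sketch below is illustrative only and is not the authors' implementation; the function name and parameters are hypothetical.

```python
import random
from collections import defaultdict

def crp_assignments(n_agents, alpha=1.0, seed=0):
    """Sample cluster assignments from a Chinese Restaurant Process.

    Illustrative sketch: under a CRP prior, each new agent joins an
    existing behavioral cluster with probability proportional to its
    size, or opens a new cluster with probability proportional to
    alpha. This is the property that lets a nonparametric model infer
    as many agent-behavior clusters as the data supports.
    """
    rng = random.Random(seed)
    assignments = []           # cluster index chosen for each agent
    sizes = defaultdict(int)   # cluster index -> number of members
    for i in range(n_agents):
        # Existing cluster k is picked with prob size_k / (i + alpha),
        # a brand-new cluster with prob alpha / (i + alpha).
        r = rng.uniform(0, i + alpha)
        cum = 0.0
        chosen = len(sizes)    # default: open a new cluster
        for k, s in sizes.items():
            cum += s
            if r < cum:
                chosen = k
                break
        assignments.append(chosen)
        sizes[chosen] += 1
    return assignments

print(crp_assignments(20, alpha=1.5))
```

In a relational trust model along these lines, each cluster would carry its own behavior parameters (e.g., probability of honoring an agreement), so a stranger's likely behavior can be predicted from the cluster it is assigned to rather than from direct interaction history.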