Empirical methods for artificial intelligence.
Learning in graphical models.
Collaborative reputation mechanisms for electronic marketplaces. Decision Support Systems.
An agent-based approach for building complex software systems. Communications of the ACM.
Social ReGreT, a reputation model based on social relations. ACM SIGecom Exchanges.
Belief, information acquisition, and trust in multi-agent systems: a modal logic formulation. Artificial Intelligence.
Information Theory, Inference and Learning Algorithms.
The Knowledge Engineering Review.
Beyond proof-of-compliance: security analysis in trust management. Journal of the ACM.
Trust evaluation through relationship analysis. Proceedings of the 4th International Joint Conference on Autonomous Agents and Multiagent Systems.
TRAVOS: Trust and reputation in the context of inaccurate information sources. Autonomous Agents and Multi-Agent Systems.
Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning).
A survey of trust and reputation systems for online service provision. Decision Support Systems.
ARES '07: Proceedings of the Second International Conference on Availability, Reliability and Security.
Rumours and reputation: evaluating multi-dimensional trust within a decentralised reputation system. Proceedings of the 6th International Joint Conference on Autonomous Agents and Multiagent Systems.
Authorization in trust management: features and foundations. ACM Computing Surveys.
A statistical relational model for trust learning. Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems, Volume 2.
Bayesian reputation modeling in e-marketplaces sensitive to subjectivity, deception and change. AAAI'06: Proceedings of the 21st National Conference on Artificial Intelligence, Volume 2.
A multi-dimensional trust model for heterogeneous contract observations. AAAI'07: Proceedings of the 22nd National Conference on Artificial Intelligence, Volume 1.
Obtaining reliable feedback for sanctioning reputation mechanisms. Journal of Artificial Intelligence Research.
Dynamic verification of trust in distributed open systems. IJCAI'07: Proceedings of the 20th International Joint Conference on Artificial Intelligence.
Formal trust model for multiagent systems. IJCAI'07: Proceedings of the 20th International Joint Conference on Artificial Intelligence.
StereoTrust: a group-based personalized trust model. Proceedings of the 18th ACM Conference on Information and Knowledge Management.
Probabilistic prediction of peers' performance in P2P networks. Engineering Applications of Artificial Intelligence.
Towards incentive-compatible reputation management. AAMAS'02: Proceedings of the 2002 International Conference on Trust, Reputation, and Security: Theories and Practice.
A probabilistic model for trust and reputation. Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems, Volume 1.
Bootstrapping trust evaluations through stereotypes. Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems, Volume 1.
Norms evaluation through reputation mechanisms for BDI agents. Artificial Intelligence Research and Development: Proceedings of the 13th International Conference of the Catalan Association for Artificial Intelligence.
Developing strategies for the ART domain. CAEPIA'09: Current Topics in Artificial Intelligence, 13th Conference of the Spanish Association for Artificial Intelligence.
Establishing trust in cloud computing. IT Professional.
Statistical relational learning of trust. Machine Learning.
Intertemporal discount factors as a measure of trustworthiness in electronic commerce. IEEE Transactions on Knowledge and Data Engineering.
A probabilistic approach for maintaining trust based on evidence. Journal of Artificial Intelligence Research.
Argumentation-based reasoning in agents with varying degrees of trust. Proceedings of the 10th International Conference on Autonomous Agents and Multiagent Systems, Volume 2.
iCLUB: an integrated clustering-based approach to improve the robustness of reputation systems. Proceedings of the 10th International Conference on Autonomous Agents and Multiagent Systems, Volume 3.
An adaptive group-based reputation system in peer-to-peer networks. WINE'05: Proceedings of the First International Conference on Internet and Network Economics.
Trust decision-making in multi-agent systems. IJCAI'11: Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, Volume 1.
Facing openness with socio-cognitive trust and categories. IJCAI'11: Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, Volume 1.
Decision making matters: a better way to evaluate trust models. Knowledge-Based Systems.
Trust-based role coordination in task-oriented multiagent systems. Knowledge-Based Systems.
From blurry numbers to clear preferences: a mechanism to extract reputation in social networks. Expert Systems with Applications.
In many dynamic open systems, autonomous agents must interact with one another to achieve their goals. Such agents may be self-interested and, when trusted to perform an action, may betray that trust by not performing the action as required. Due to the scale and dynamism of these systems, agents will often need to interact with other agents with which they have little or no past experience. Each agent must therefore be capable of assessing and identifying reliable interaction partners, even if it has no personal experience with them. To this end, we present HABIT, a Hierarchical And Bayesian Inferred Trust model for assessing how much an agent should trust its peers based on direct and third-party information. This model is robust in environments in which third-party information is malicious, noisy, or otherwise inaccurate. Although existing approaches claim to achieve this, most rely on heuristics with little theoretical foundation. In contrast, HABIT is based exclusively on principled statistical techniques: it can cope with multiple discrete or continuous aspects of trustee behaviour; it does not restrict agents to using a single shared representation of behaviour; it can improve assessment by using any observed correlation between the behaviour of similar trustees or information sources; and it provides a pragmatic solution to the whitewasher problem (in which unreliable agents assume a new identity to avoid bad reputation). In this paper, we describe the theoretical aspects of HABIT, and present experimental results that demonstrate its ability to predict agent behaviour in both a simulated environment and one based on data from a real-world webserver domain. In particular, these experiments show that HABIT can predict trustee performance based on multiple representations of behaviour, and is up to twice as accurate as BLADE, an existing state-of-the-art trust model that is statistically principled and has previously been shown to outperform a number of other probabilistic trust models.
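The abstract describes a hierarchical Bayesian approach in which evidence about the behaviour of similar trustees and information sources improves the assessment of unfamiliar ones. As a rough illustration of that general idea only (not of HABIT itself, whose model is considerably richer), the Python sketch below moment-matches a population-level Beta prior over trustee reliability and combines it with each trustee's direct observations; all names here (Trustee, population_prior, expected_trust) are hypothetical.

# Illustrative sketch only, NOT the HABIT model: a minimal hierarchical
# Beta-Bernoulli trust estimate. Each trustee's probability of fulfilling an
# interaction is assumed to be drawn from a shared population-level Beta prior,
# so evidence about previously observed trustees informs predictions about
# newcomers with little or no direct experience.

from dataclasses import dataclass

@dataclass
class Trustee:                    # hypothetical helper type
    name: str
    successes: int = 0            # direct observations: obligations fulfilled
    failures: int = 0             # direct observations: obligations broken

def population_prior(trustees, prior_strength=2.0):
    """Moment-match a Beta(alpha, beta) prior from all observed trustees."""
    rates = [t.successes / (t.successes + t.failures)
             for t in trustees if t.successes + t.failures > 0]
    mean = sum(rates) / len(rates) if rates else 0.5
    return prior_strength * mean, prior_strength * (1.0 - mean)

def expected_trust(trustee, alpha, beta):
    """Posterior mean probability that the trustee fulfils the next interaction."""
    n = trustee.successes + trustee.failures
    return (alpha + trustee.successes) / (alpha + beta + n)

if __name__ == "__main__":
    observed = [Trustee("a", 9, 1), Trustee("b", 8, 2), Trustee("c", 1, 9)]
    alpha, beta = population_prior(observed)
    newcomer = Trustee("newcomer")                   # no direct experience yet
    print(expected_trust(newcomer, alpha, beta))     # falls back on the population prior
    print(expected_trust(observed[2], alpha, beta))  # dominated by direct evidence

In HABIT itself, the same intuition is developed into a full hierarchical Bayesian model that handles multiple discrete or continuous aspects of behaviour, allows agents to use different behaviour representations, and incorporates third-party reputation reports as additional evidence, which is what allows it to remain accurate when that information is noisy or malicious.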