The trust management model we present is adapted for cooperation among ubiquitous devices, rather than for the classic client-supplier relationship. We use fuzzy numbers to represent trust, capturing both the trust value and its uncertainty. The model consists of a trust representation part, a decision-making part, and a learning part. In our representation, we define the trusted agents as a type-2 fuzzy set. In the decision-making part, we apply methods from the fuzzy rule computation and fuzzy control domains to reach trusting decisions. For trust learning, we use a strictly iterative approach, well suited to constrained environments. We verify our model in a multi-agent simulation in which the agents in the community learn to identify defecting members and progressively refuse to cooperate with them. The simulation contains significant background noise to validate the model's robustness.
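As an illustration of the abstract's main ingredients, the sketch below represents an agent's trust as a triangular fuzzy number (a value plus an uncertainty band), updates it with a strictly iterative rule after each observed interaction, and defuzzifies it to reach a cooperate/refuse decision. All names (`FuzzyTrust`, `should_cooperate`), the triangular shape, the learning rate, and the decision threshold are illustrative assumptions, not the paper's actual equations; the paper's type-2 fuzzy-set machinery is simplified away here.

```python
# Hypothetical sketch of fuzzy-number trust: not the paper's actual model.
from dataclasses import dataclass


@dataclass
class FuzzyTrust:
    """Triangular fuzzy number (low, peak, high) in [0, 1]: the peak is the
    trust value; the low-high spread encodes its uncertainty."""
    low: float
    peak: float
    high: float

    def defuzzify(self) -> float:
        # Centroid of a triangular fuzzy number.
        return (self.low + self.peak + self.high) / 3.0

    def update(self, outcome: float, rate: float = 0.1) -> "FuzzyTrust":
        """Strictly iterative update (assumed form): move the peak toward the
        observed outcome (1.0 = cooperation, 0.0 = defection) and shrink the
        uncertainty band as evidence accumulates, down to a small floor."""
        peak = (1.0 - rate) * self.peak + rate * outcome
        spread = max(0.05, (self.high - self.low) * (1.0 - rate) / 2.0)
        return FuzzyTrust(max(0.0, peak - spread), peak,
                          min(1.0, peak + spread))


def should_cooperate(trust: FuzzyTrust, threshold: float = 0.5) -> bool:
    # Crisp decision after defuzzification (assumed threshold rule).
    return trust.defuzzify() >= threshold


# A consistently defecting partner drives trust down over repeated
# interactions, until cooperation with it is refused.
t = FuzzyTrust(0.3, 0.5, 0.7)
for _ in range(20):
    t = t.update(0.0)  # observed defection
print(round(t.defuzzify(), 2), should_cooperate(t))
```

The iterative update keeps only the current fuzzy number as state, which is what makes this style of learning attractive for the constrained, ubiquitous devices the abstract targets.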