This paper introduces a multi-agent belief revision algorithm that uses knowledge about the reliability or trustworthiness of information sources to evaluate incoming information and the sources providing it. The algorithm also allows an agent to learn the trustworthiness of other agents using (1) dissimilarity measures (measures of how much incorrect information a particular source has provided), calculated from the proposed belief revision processes (Direct Trust Revision), and/or (2) trust information communicated by other agents (Recommended Trust Revision). A set of experiments is performed to validate the proposed Trust Revision approaches and measure their performance. The performance (frequency response and correctness) of the proposed algorithm is analyzed in terms of delay time (the time required for the step response of an agent's belief state to reach 50 percent of the ground-truth value), maximum overshoot (the largest deviation of the belief value above the ground-truth value during the transient state), and steady-state error (the deviation of the belief value from the ground truth after the transient state). The results show a design trade-off between responsiveness to system-configuration or environmental changes and resilience to noise. An agent designer may either (1) select one of the proposed Trust Revision algorithms or (2) use both to achieve better performance at the cost of system resources such as computation power and communication bandwidth.
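To make the two learning routes concrete, here is a minimal Python sketch of how Direct Trust Revision (driven by dissimilarity) and Recommended Trust Revision (driven by communicated trust) might update a trust store. The abstract does not give the actual update rules, so everything below (the class, method names, the exponential-smoothing updates, and the learning rate) is an illustrative assumption rather than the paper's algorithm.

# Hedged sketch of the two trust-revision routes named in the abstract.
# All names, update rules, and parameters are assumptions for illustration;
# the paper's belief revision algorithm is not reproduced here.

class TrustModel:
    def __init__(self, learning_rate=0.1):
        self.trust = {}          # source id -> trust value in [0, 1]
        self.rate = learning_rate

    def get(self, source):
        return self.trust.get(source, 0.5)   # neutral prior for unknown sources

    def direct_revision(self, source, dissimilarity):
        # Direct Trust Revision: move trust toward (1 - dissimilarity), so
        # sources whose reports conflict with the agent's revised beliefs
        # (dissimilarity near 1) lose trust over time.
        t = self.get(source)
        self.trust[source] = t + self.rate * ((1.0 - dissimilarity) - t)

    def recommended_revision(self, source, recommender, recommended_trust):
        # Recommended Trust Revision: blend in another agent's reported
        # trust value, weighted by how much we trust the recommender.
        t = self.get(source)
        w = self.rate * self.get(recommender)
        self.trust[source] = (1.0 - w) * t + w * recommended_trust

model = TrustModel()
model.direct_revision("agent_B", dissimilarity=0.8)    # B reported mostly incorrect info
model.recommended_revision("agent_C", recommender="agent_B", recommended_trust=0.9)

Using both routes together, as the abstract suggests, would mean calling both update methods as evidence arrives, at the cost of the extra computation and the bandwidth needed to exchange trust recommendations.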
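The three evaluation metrics are defined precisely enough in the abstract to compute from a recorded belief trajectory. The sketch below follows the abstract's 50-percent threshold for delay time; the choice of the last 20 percent of samples as the "steady state" window is an assumption made for this example, not a detail from the paper.

# Illustrative computation of delay time, maximum overshoot, and
# steady-state error for a belief state's step response.

def step_response_metrics(belief, ground_truth, dt=1.0):
    n = len(belief)
    # Delay time: first time the belief reaches 50% of the ground-truth value.
    delay = next((i * dt for i, b in enumerate(belief)
                  if b >= 0.5 * ground_truth), None)
    # Maximum overshoot: largest excursion above the ground-truth value.
    overshoot = max(0.0, max(belief) - ground_truth)
    # Steady-state error: mean deviation over the tail of the trajectory
    # (last 20% of samples, an assumed transient/steady-state split).
    tail = belief[int(0.8 * n):]
    ss_error = sum(abs(b - ground_truth) for b in tail) / len(tail)
    return delay, overshoot, ss_error

# Example: a belief value converging toward a ground truth of 1.0.
trajectory = [0.0, 0.3, 0.7, 1.1, 1.05, 0.98, 1.0, 1.0]
print(step_response_metrics(trajectory, ground_truth=1.0))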