A Trust-Based Multiagent System

  • Authors:
  • Richard Seymour; Gilbert L. Peterson

  • Venue:
  • CSE '09 Proceedings of the 2009 International Conference on Computational Science and Engineering - Volume 03
  • Year:
  • 2009

Abstract

Cooperative agent systems often fail to account for sneaky agents who are willing to cooperate when the stakes are low but take selfish, greedy actions when the rewards rise. Trust modeling typically focuses on identifying an appropriate trust level for each other agent in the environment and then using these levels to decide how to interact with that agent. Adding trust to an interactive partially observable Markov decision process (I-POMDP) allows trust levels to be continuously monitored and corrected, enabling agents to make better decisions. Although trust modeling increases the cost of the decision-process calculations, it solves more complex trust problems that are representative of the human world. The modified I-POMDP reward function and belief models can be used to accurately track the trust levels of agents with hidden agendas. Testing demonstrates that agents quickly identify the hidden trust levels and mitigate the impact of a deceitful agent.
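The abstract's core idea, continuously monitoring and correcting a belief over another agent's hidden trust level as observations accumulate, can be sketched as a simple Bayesian belief update. This is only an illustrative sketch: the trust levels, observation likelihoods, and stakes below are assumptions, not the paper's actual I-POMDP reward function or belief model.

```python
# Hypothetical sketch: Bayesian update of a belief over a partner's
# hidden trust level. All names and probabilities are illustrative
# assumptions, not the paper's model.

TRUST_LEVELS = ["trustworthy", "deceitful"]

def obs_likelihood(action, stakes, level):
    """Assumed P(observed action | stakes, trust level): a deceitful
    agent cooperates at low stakes but defects when rewards rise."""
    if level == "trustworthy":
        return 0.9 if action == "cooperate" else 0.1
    p_coop = 0.8 if stakes == "low" else 0.2
    return p_coop if action == "cooperate" else 1.0 - p_coop

def update_belief(belief, action, stakes):
    """One Bayes step: b'(l) proportional to P(obs | l) * b(l)."""
    posterior = {l: obs_likelihood(action, stakes, l) * belief[l]
                 for l in TRUST_LEVELS}
    z = sum(posterior.values())  # normalizing constant
    return {l: p / z for l, p in posterior.items()}

# Start from a uniform belief; observe cooperation while stakes are
# low, then repeated defection once the rewards rise.
belief = {l: 1.0 / len(TRUST_LEVELS) for l in TRUST_LEVELS}
belief = update_belief(belief, "cooperate", "low")
for _ in range(3):
    belief = update_belief(belief, "defect", "high")
```

After a few high-stakes defections the posterior mass concentrates on the deceitful level, mirroring the abstract's claim that agents quickly identify hidden trust levels.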