Planning and evaluating multiagent influences under reward uncertainty

  • Authors:
  • Stefan Witwicki (GAIPS/INESC-ID, Instituto Superior Técnico, UTL, Porto Salvo, Portugal)
  • Inn-Tung Chen (University of Michigan, Ann Arbor, MI)
  • Edmund Durfee (University of Michigan, Ann Arbor, MI)
  • Satinder Singh (University of Michigan, Ann Arbor, MI)

  • Venue:
  • Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems - Volume 3
  • Year:
  • 2012

Abstract

Forming commitments about abstract influences that agents can exert on one another has shown promise in improving the tractability of multiagent coordination under uncertainty. We now extend this approach to domains with meta-level reward-model uncertainty. Intuitively, an agent may actually improve collective performance by forming a weaker commitment that allows more latitude to adapt its policy as it refines its reward model. To account for this reward uncertainty, we introduce and contrast three new techniques.
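
The following is a minimal sketch, not the authors' algorithm, illustrating the abstract's core intuition: when an agent is uncertain which reward model is correct, a weaker commitment that preserves its freedom to adapt its policy after refining that model can have higher expected value than a stronger commitment that locks a policy in up front. The prior, the candidate reward models, and the policy values are all hypothetical numbers chosen for illustration.

```python
# Hypothetical prior over two candidate reward models.
prior = {"model_A": 0.5, "model_B": 0.5}

# Hypothetical value (to the team) of each local policy under each
# reward model. A strong commitment fixes one policy now; a weak
# commitment lets the agent pick its policy after the reward model
# has been refined.
policy_values = {
    "model_A": {"policy_1": 10.0, "policy_2": 4.0},
    "model_B": {"policy_1": 3.0, "policy_2": 9.0},
}

def value_strong_commitment(policy: str) -> float:
    """Expected value when committed to a single policy up front."""
    return sum(p * policy_values[m][policy] for m, p in prior.items())

def value_weak_commitment() -> float:
    """Expected value when the agent may choose the best policy for
    whichever reward model turns out to be correct."""
    return sum(p * max(policy_values[m].values()) for m, p in prior.items())

if __name__ == "__main__":
    for pol in ("policy_1", "policy_2"):
        print(f"strong commitment to {pol}: {value_strong_commitment(pol):.1f}")
    print(f"weak commitment (adapt later): {value_weak_commitment():.1f}")
    # Both strong commitments yield 6.5 in expectation; the weak one
    # yields 9.5, the gap being the value of retained flexibility.
```

This sketch deliberately omits the other side of the trade-off the paper addresses: a weaker commitment also gives teammates less to plan against, so the techniques the abstract refers to must weigh the flexibility gained against the coordination value lost.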