Formalizing Multi-Agent POMDP's in the context of network routing

  • Authors:
  • Bharaneedharan Rathnasabapathy; Piotr Gmytrasiewicz

  • Venue:
  • HICSS '03 Proceedings of the 36th Annual Hawaii International Conference on System Sciences (HICSS'03) - Track 9 - Volume 9
  • Year:
  • 2003


Abstract

This paper uses partially observable Markov decision processes (POMDP's) as a basic framework for multi-agent planning. We distinguish three perspectives: the first is that of an omniscient agent that has access to the global state of the system, the second is the perspective of an individual agent that has access only to its local state, and the third is the perspective of an agent that models the states of information of the other agents. We detail how the first perspective differs from the other two due to partial observability. POMDP's allow us to formally define the notion of optimal actions in each perspective, to quantify the loss of performance due to partial observability, and to quantify the possible gain in performance due to intelligent information exchange between the agents. As an example we consider the domain of agents in a distributed information network, where agents have to decide how to route packets and how to share information with other agents. Though almost all routing protocols have been formulated based on detailed study of the functional parameters in the system, there has been no clear formal representation of optimality. We argue that the various routing protocols should fall out as different approximations to policies (optimal solutions) in such a framework. Our approach also proves critical and useful for the computation of error bounds due to the approximations used in practical routing algorithms. Each routing protocol is a conditional plan that involves physical actions, which change the physical state of the system, and actions that explicitly exchange information.
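To make the POMDP machinery the abstract refers to concrete, the following is a minimal sketch of a single-agent (local-state) perspective on a toy routing problem: an agent must pick one of two links under uncertainty about which link is congested, maintaining a belief over the hidden state via a Bayes filter and acting greedily on expected immediate reward. All names, probabilities, and the two-link scenario are illustrative assumptions, not taken from the paper.

```python
# Hypothetical two-link routing POMDP (all numbers are illustrative assumptions).
# Hidden state: which link is congested. Action: which link to route through.
STATES = [0, 1]    # index of the congested link
ACTIONS = [0, 1]   # index of the link chosen for routing
OBSERVATIONS = [0, 1]  # noisy report of which link looks congested

# Transition model T[s][a][s']: congestion persists regardless of routing choice.
T = {s: {a: {sp: 1.0 if sp == s else 0.0 for sp in STATES}
         for a in ACTIONS}
     for s in STATES}

def O(sp, o):
    """Observation model: the report matches the true congested link 80% of the time."""
    return 0.8 if o == sp else 0.2

def R(s, a):
    """Reward: routing through the congested link costs 1, otherwise 0."""
    return -1.0 if a == s else 0.0

def belief_update(b, a, o):
    """Bayes filter: b'(s') proportional to O(s', o) * sum_s T(s, a, s') * b(s)."""
    unnorm = {sp: O(sp, o) * sum(T[s][a][sp] * b[s] for s in STATES)
              for sp in STATES}
    z = sum(unnorm.values())
    return {sp: p / z for sp, p in unnorm.items()}

def greedy_action(b):
    """One-step lookahead: maximize expected immediate reward under belief b."""
    return max(ACTIONS, key=lambda a: sum(b[s] * R(s, a) for s in STATES))
```

Starting from a uniform belief and observing a congestion report for link 0 shifts the belief to 0.8 on link 0, so the greedy policy routes via link 1. A full treatment in the paper's sense would replace the one-step lookahead with an optimal conditional plan, and the multi-agent perspectives would extend the state to include the other agents' information.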