Dynamic multiagent probabilistic inference

  • Authors:
  • Xiangdong An; Yang Xiang; Nick Cercone

  • Affiliations:
  • Faculty of Computer Science, Dalhousie University, Halifax, Nova Scotia, Canada B3H 1W5; Department of Computing and Information Science, University of Guelph, Guelph, Ontario, Canada N1G 2W1; Department of Computer Science and Engineering, York University, Toronto, Ontario, Canada M3J 1P3

  • Venue:
  • International Journal of Approximate Reasoning
  • Year:
  • 2008

Abstract

Cooperative multiagent probabilistic inference can be applied in areas such as building surveillance and complex system diagnosis to reason about the states of distributed uncertain domains. In the static case, multiply sectioned Bayesian networks (MSBNs) provide a solution when interactions within each agent are structured and those among agents are limited. In the dynamic case, however, the agents' inference does not guarantee exact posterior probabilities if each agent evolves separately using a single-agent dynamic Bayesian network (DBN). Nevertheless, because the influence of the past is discounted over time, we may not need the whole history of a domain to reason about its current state. In this paper, we propose to reason about the state of a distributed dynamic domain period by period using an MSBN. To minimize the influence of the ignored history on the posterior probabilities, we propose to observe as many observable variables as possible in the modeled history. Because of limitations of the problem domains, however, it could be very costly to observe all observable variables; we therefore present a distributed algorithm to compute the set of observable variables that are relevant to our concerns. Experimental results on the relationship between computational complexity and the length of the represented history, and on the effectiveness of the approach, are presented.
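
For intuition only, the following minimal Python sketch (a single-chain hidden Markov model, not the paper's MSBN algorithm) contrasts exact filtering over the full observation history with the period-by-period idea described above: the history before a fixed window is discarded and replaced by an uninformative prior, while every observable variable inside the window is observed. The transition matrix, emission matrix, window length and observation sequence are illustrative assumptions, not values from the paper.

```python
import numpy as np

# State transition matrix T[i, j] = P(x_t = j | x_{t-1} = i)  (assumed values)
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])
# Emission matrix E[i, k] = P(obs = k | x_t = i)              (assumed values)
E = np.array([[0.8, 0.2],
              [0.3, 0.7]])

def filter_posterior(prior, observations):
    """Forward filtering: posterior over the current state given a prior
    over the state at the start of the observation sequence."""
    belief = prior.copy()
    for obs in observations:
        belief = T.T @ belief          # predict one step ahead
        belief = belief * E[:, obs]    # condition on the observation
        belief /= belief.sum()         # normalize
    return belief

obs_history = [0, 0, 1, 1, 0, 1, 1, 1, 0, 1]   # full observation history (assumed)
window = 4                                     # length of the modeled history (assumed)

# Exact posterior using the whole history.
exact = filter_posterior(np.array([0.5, 0.5]), obs_history)

# Period-by-period approximation: the history before the window is ignored
# (uninformative prior), but all observables inside the window are observed.
approx = filter_posterior(np.array([0.5, 0.5]), obs_history[-window:])

print("posterior from full history:", exact)
print("posterior from last window :", approx)
```

The gap between the two posteriors shrinks as the window grows or as the chain mixes faster, which is the intuition behind discounting the past and observing as much of the modeled history as possible.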