Inference in hybrid networks: theoretical limits and practical algorithms

  • Authors:
  • Uri Lerner; Ronald Parr

  • Affiliations:
  • Computer Science Department, Stanford University; Computer Science Department, Duke University

  • Venue:
  • UAI'01 Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence
  • Year:
  • 2001


Abstract

An important subclass of hybrid Bayesian networks is the class representing Conditional Linear Gaussian (CLG) distributions -- distributions with a multivariate Gaussian component for each instantiation of the discrete variables. In this paper we explore the problem of inference in CLGs and provide complexity results for an important class of CLGs, which includes Switching Kalman Filters. In particular, we prove that even if the CLG is restricted to the extremely simple structure of a polytree, the inference task is NP-hard. Furthermore, we show that, unless P=NP, even approximate inference on these simple networks is intractable. Given the often prohibitive computational cost of even approximate inference, we must take advantage of special domain properties that may enable efficient inference. We concentrate on the fault diagnosis domain and explore several approximate inference algorithms. These algorithms try to find a small subset of Gaussians that is a good approximation to the full mixture distribution. We consider two Monte Carlo approaches and a novel approach that enumerates mixture components in order of prior probability. We compare these methods on a variety of problems and show that our novel algorithm is very promising for large hybrid diagnosis problems.
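The enumeration idea mentioned in the abstract -- selecting mixture components (i.e., instantiations of the discrete variables) in decreasing order of prior probability -- can be sketched with a best-first search over joint assignments. This is only an illustrative sketch, not the paper's algorithm: it assumes, for simplicity, fully independent discrete priors, whereas the networks in the paper have structured discrete parents. The function name `enumerate_by_prior` and the input format are hypothetical.

```python
import heapq
import math

def enumerate_by_prior(priors, k):
    """Yield up to k joint assignments of independent discrete variables
    in decreasing order of prior probability.

    priors: list of lists, where priors[i][v] = P(X_i = v)
            (independence of the X_i is an illustrative assumption).
    Returns a list of (assignment tuple, prior probability) pairs.
    """
    # For each variable, rank its values from most to least probable.
    order = [sorted(range(len(p)), key=lambda v: -p[v]) for p in priors]

    def prob(ranks):
        # ranks[i] indexes into order[i]; the joint prior factorizes.
        return math.prod(priors[i][order[i][r]] for i, r in enumerate(ranks))

    start = tuple(0 for _ in priors)           # all-most-probable assignment
    heap = [(-prob(start), start)]             # max-heap via negated prob
    seen = {start}
    out = []
    while heap and len(out) < k:
        neg, ranks = heapq.heappop(heap)
        out.append((tuple(order[i][r] for i, r in enumerate(ranks)), -neg))
        # Successors: demote one variable to its next-most-probable value.
        for i in range(len(priors)):
            if ranks[i] + 1 < len(priors[i]):
                nxt = ranks[:i] + (ranks[i] + 1,) + ranks[i + 1:]
                if nxt not in seen:
                    seen.add(nxt)
                    heapq.heappush(heap, (-prob(nxt), nxt))
    return out
```

For example, with two binary variables with priors `[0.7, 0.3]` and `[0.6, 0.4]`, the first three components returned are the assignments with joint priors 0.42, 0.28, and 0.18; conditioning the corresponding Gaussians on only these components yields a small mixture that covers most of the prior mass.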