Scalable Node-Level Computation Kernels for Parallel Exact Inference

  • Authors:
  • Yinglong Xia; Viktor K. Prasanna

  • Affiliations:
  • University of Southern California, Los Angeles; University of Southern California, Los Angeles

  • Venue:
  • IEEE Transactions on Computers
  • Year:
  • 2010

Abstract

In this paper, we investigate data parallelism in exact inference with respect to arbitrary junction trees. Exact inference is a key problem in exploring probabilistic graphical models, where the computational complexity increases dramatically with the clique width and the number of states of the random variables. We study potential table representation and scalable algorithms for node-level primitives. Based on these node-level primitives, we propose computation kernels for evidence collection and evidence distribution. A data parallel algorithm for exact inference is presented using the proposed computation kernels. We analyze the scalability of the node-level primitives, the computation kernels, and the exact inference algorithm using the coarse-grained multicomputer (CGM) model. According to the analysis, we achieve $O(N d_{\mathcal{C}} w_{\mathcal{C}} \prod_{j=1}^{w_{\mathcal{C}}} r_{\mathcal{C},j} / P)$ local computation time and $O(N)$ global communication rounds using $P$ processors, $1 \le P \le \max_{\mathcal{C}} \prod_{j=1}^{w_{\mathcal{C}}} r_{\mathcal{C},j}$, where $N$ is the number of cliques in the junction tree, $d_{\mathcal{C}}$ is the clique degree, $r_{\mathcal{C},j}$ is the number of states of the $j$th random variable in clique $\mathcal{C}$, $w_{\mathcal{C}}$ is the clique width, and $w_{s}$ is the separator width. We implemented the proposed algorithm on state-of-the-art clusters. Experimental results show that the proposed algorithm exhibits almost linear scalability over a wide range of processor counts.
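
To make the abstract concrete, the following is a minimal sketch, not the authors' implementation, of the potential table representation the abstract refers to: a clique potential over $w_{\mathcal{C}}$ variables, the $j$th having $r_{\mathcal{C},j}$ states, stored as a flat array of length $\prod_{j} r_{\mathcal{C},j}$, together with a marginalization routine as one plausible node-level primitive. The function names, the row-major indexing, and the choice of marginalization as the illustrated primitive are assumptions made for this sketch; the paper's actual primitives and data layout may differ.

    # Sketch of a flat potential table and a node-level marginalization
    # primitive (illustrative only; not the paper's implementation).
    from itertools import product

    def make_potential_table(num_states):
        """Allocate a flat potential table for a clique whose variables
        have the given numbers of states (length = product of num_states)."""
        size = 1
        for r in num_states:
            size *= r
        return [0.0] * size

    def flat_index(state, num_states):
        """Map a joint state (tuple of per-variable state indices) to a
        flat array index, using row-major (last variable fastest) order."""
        idx = 0
        for s, r in zip(state, num_states):
            idx = idx * r + s
        return idx

    def marginalize(table, num_states, keep_vars):
        """Sum the clique potential over all variables not in keep_vars,
        producing a smaller flat table (e.g., a separator potential)."""
        keep_states = [num_states[v] for v in keep_vars]
        out = make_potential_table(keep_states)
        for state in product(*[range(r) for r in num_states]):
            sub = tuple(state[v] for v in keep_vars)
            out[flat_index(sub, keep_states)] += table[flat_index(state, num_states)]
        return out

    # Usage: a clique over three binary variables, marginalized onto the
    # separator formed by variables 0 and 2.
    if __name__ == "__main__":
        num_states = [2, 2, 2]
        table = [1.0] * 8      # uniform potential, for illustration
        sep = marginalize(table, num_states, keep_vars=[0, 2])
        print(sep)             # [2.0, 2.0, 2.0, 2.0]

The flat-array layout is what makes the local computation cost proportional to the table size $\prod_{j} r_{\mathcal{C},j}$, which is the quantity that appears in the abstract's complexity bound and that the data parallel algorithm divides across the $P$ processors.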