Exploring parallelism in learning belief networks

  • Authors:
  • T. Chu; Y. Xiang

  • Affiliations:
  • Dept. of Computer Science, Univ. of Regina, Regina, Sask., Canada (both authors)

  • Venue:
  • UAI'97: Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence
  • Year:
  • 1997

Abstract

It has been shown that a class of probabilistic domain models cannot be learned correctly by several existing algorithms that employ a single-link lookahead search. When a multi-link lookahead search is used instead, the computational complexity of the learning algorithm increases. We study how to use parallelism to tackle this increased complexity in learning such models and to speed up learning in large domains. An algorithm is proposed to decompose the learning task for parallel processing. A further task decomposition is used to balance the load among processors and to increase speed-up and efficiency. For learning from very large datasets, we present a regrouping of the available processors such that slow data access through files can be replaced by fast memory access. Our implementation on a parallel computer demonstrates the effectiveness of the algorithm.
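The core cost the abstract points to, the multi-link lookahead search, lends itself to parallel decomposition by partitioning the candidate link sets among processors. The following is a minimal sketch of that idea using Python's multiprocessing; every name here (generate_candidates, score_candidate, the dummy scoring value) is a hypothetical illustration under stated assumptions, not the authors' implementation.

```python
# Sketch: decomposing a multi-link lookahead search across worker
# processes. All identifiers and the scoring function are hypothetical
# stand-ins for illustration only.
from itertools import combinations
from multiprocessing import Pool

def generate_candidates(nodes, k):
    """Enumerate candidate sets of 1..k links (node pairs) to add.
    A k-link lookahead scans every such combination, which is what
    inflates the cost relative to single-link search."""
    links = list(combinations(sorted(nodes), 2))
    for size in range(1, k + 1):
        yield from combinations(links, size)

def score_candidate(candidate):
    """Hypothetical stand-in for scoring the network obtained by
    adding this link set (e.g., a cross-entropy decrement)."""
    return -float(len(candidate))  # dummy value for illustration

def best_in_chunk(chunk):
    """Each worker scans its own share of the candidate space."""
    best = (float("-inf"), None)
    for cand in chunk:
        s = score_candidate(cand)
        if s > best[0]:
            best = (s, cand)
    return best

def parallel_lookahead(nodes, k, n_workers=4):
    candidates = list(generate_candidates(nodes, k))
    # Round-robin split as a coarse static load balance; the paper
    # refines this with a further, second-level task decomposition.
    chunks = [candidates[i::n_workers] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        results = pool.map(best_in_chunk, chunks)
    return max(results, key=lambda r: r[0])

if __name__ == "__main__":
    score, links = parallel_lookahead(["a", "b", "c", "d"], k=2)
    print(score, links)
```

The round-robin split keeps chunk sizes near-equal when candidates vary little in cost; when evaluation costs are skewed, a finer decomposition (as the abstract describes) yields better processor utilization.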