Simulation of multiprocessors: accuracy and performance
Parallel hierarchical N-body methods and their implications for multiprocessors
The directory-based cache coherence protocol for the DASH multiprocessor
ISCA '90 Proceedings of the 17th annual international symposium on Computer Architecture
A parallel Lauritzen-Spiegelhalter algorithm for probabilistic inference
Proceedings of the 1994 ACM/IEEE conference on Supercomputing
Context-specific independence in Bayesian networks
UAI'96 Proceedings of the Twelfth international conference on Uncertainty in artificial intelligence
Non-Strict Cache Coherence: Exploiting Data-Race Tolerance in Emerging Applications
ICPP '00 Proceedings of the Proceedings of the 2000 International Conference on Parallel Processing
Scalable Parallel Implementation of Exact Inference in Bayesian Networks
ICPADS '06 Proceedings of the 12th International Conference on Parallel and Distributed Systems - Volume 1
A join tree probability propagation architecture for semantic modeling
Journal of Intelligent Information Systems
A branch-and-bound algorithm for MDL learning Bayesian networks
UAI'00 Proceedings of the Sixteenth conference on Uncertainty in artificial intelligence
Logarithmic time parallel Bayesian inference
UAI'98 Proceedings of the Fourteenth conference on Uncertainty in artificial intelligence
Computational advantages of relevance reasoning in Bayesian belief networks
UAI'97 Proceedings of the Thirteenth conference on Uncertainty in artificial intelligence
Probabilistic inference is an important technique for reasoning under uncertainty in such areas as medicine, software fault diagnosis, speech recognition, and automated vision. Although it could contribute to many more applications, probabilistic inference is extremely computationally intensive, making it impractical for applications that involve large databases. One way to address this problem is to take advantage of the technique's available parallelism. The authors evaluated the effectiveness of performing probabilistic inference in parallel. They found that parallel probabilistic inference presents interesting tradeoffs between load balance and data locality. These factors are key to successful parallel applications and yet are often difficult to optimize. The authors attempted to find the optimal tradeoff by writing two parallel programs, one static and one dynamic, that exploit different forms of parallelism available in probabilistic inference. Both programs were tested on a 32-processor Stanford DASH and a 16-processor SGI Challenge XL, using six medium-sized belief networks to evaluate the programs. In a series of experiments and analyses, the results were evaluated to see how computation time was used and how data locality affected performance. The authors then tested the static program using a large medical diagnosis network. The static program, which maximizes data locality, outperformed the dynamic program. It also reduced the time probabilistic inference takes on the large medical network. The results suggest that maintaining good data locality is crucial for obtaining good speedups and that the speedups attained depend on the network's structure and size.
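The load-balance-versus-locality tradeoff the abstract describes can be illustrated with a small sketch (this is not the authors' code; the task costs and worker count are hypothetical stand-ins for per-clique inference work and processors). A static scheme assigns contiguous blocks of tasks to each worker, keeping related data together but risking skew; a dynamic scheme feeds each task to the currently least-loaded worker, balancing load at the cost of scattering data.

```python
# Illustrative sketch: static block partitioning vs. dynamic (greedy)
# task assignment. Costs stand in for per-clique inference work.

def static_assign(costs, n_workers):
    """Block-partition tasks into contiguous ranges: good data locality,
    but uneven costs can leave some workers overloaded."""
    chunk = -(-len(costs) // n_workers)  # ceiling division
    return [costs[i * chunk:(i + 1) * chunk] for i in range(n_workers)]

def dynamic_assign(costs, n_workers):
    """Assign each task to the currently least-loaded worker: good load
    balance, but related tasks scatter across workers (poor locality)."""
    bins = [[] for _ in range(n_workers)]
    loads = [0] * n_workers
    for c in costs:
        i = loads.index(min(loads))
        bins[i].append(c)
        loads[i] += c
    return bins

def makespan(bins):
    """Finish time of the slowest worker."""
    return max(sum(b) for b in bins)

costs = [9, 1, 1, 1, 8, 1, 1, 1]  # skewed per-task work, as in real networks
print(makespan(static_assign(costs, 4)))   # 10 (one block gets the 9 plus a 1)
print(makespan(dynamic_assign(costs, 4)))  # 9  (heavy tasks spread out)
```

On this skewed workload the dynamic scheme finishes sooner, which is exactly why the paper's finding is interesting: on real machines the static program still won, because the locality it preserves outweighs the modest load imbalance.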