Computing the expected statistics is the main bottleneck in learning Bayesian networks in large-scale problem domains. This paper presents PL-SEM, a parallel algorithm for learning Bayesian networks based on an existing structural EM algorithm (SEM). Since the expected statistics are computed in the parametric-learning part of SEM, PL-SEM exploits a parallel EM algorithm that parallelizes both the E-step and the M-step. At the E-step, PL-SEM computes the expected statistics of each sample in parallel; at the M-step, using the conditional independencies encoded by the network and the expected statistics from the E-step, it exploits the decomposition of the complete-data likelihood to estimate each local likelihood function in parallel. PL-SEM thus computes the expected statistics efficiently and greatly reduces the time complexity of learning Bayesian networks.
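The two levels of parallelism described above can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a toy two-node network H → X (H hidden, X observed, both binary), and uses Python threads as a stand-in for the distributed processors PL-SEM would use. The per-sample E-step computations are independent and so can run in parallel; the M-step then re-estimates each local table separately, mirroring the decomposition of the complete-data likelihood.

```python
# One EM iteration for a toy network H -> X (H hidden, X observed, binary).
# Illustrative sketch only; names and structure are assumptions, not PL-SEM's code.
from concurrent.futures import ThreadPoolExecutor

# Current parameters: prior P(H) and conditional P(X|H)
p_h = [0.5, 0.5]                      # P(H=0), P(H=1)
p_x_given_h = [[0.8, 0.2],            # P(X=0|H=0), P(X=1|H=0)
               [0.3, 0.7]]            # P(X=0|H=1), P(X=1|H=1)

def e_step_sample(x):
    """E-step for one sample: expected counts via the posterior P(H | X=x)."""
    joint = [p_h[h] * p_x_given_h[h][x] for h in (0, 1)]
    z = sum(joint)
    post = [j / z for j in joint]     # P(H=h | X=x)
    # This sample's expected statistics for the H table and the X|H table
    return post, [[post[h] if xv == x else 0.0 for xv in (0, 1)] for h in (0, 1)]

data = [0, 0, 1, 1, 1, 0, 1, 1]

# E-step: each sample's contribution is independent, so samples can be
# processed in parallel (threads here; PL-SEM distributes them over processors).
with ThreadPoolExecutor() as pool:
    results = list(pool.map(e_step_sample, data))

# Accumulate the per-sample expected counts
eh = [sum(r[0][h] for r in results) for h in (0, 1)]
exh = [[sum(r[1][h][x] for r in results) for x in (0, 1)] for h in (0, 1)]

# M-step: the complete-data likelihood decomposes per family, so each local
# table is re-estimated independently (and could be estimated in parallel).
n = len(data)
p_h = [eh[h] / n for h in (0, 1)]
p_x_given_h = [[exh[h][x] / eh[h] for x in (0, 1)] for h in (0, 1)]
```

The key point the sketch shows is that no synchronization is needed within the E-step beyond the final summation of expected counts, and none within the M-step at all, since each conditional probability table depends only on its own expected statistics.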