A parallel algorithm for learning Bayesian networks
PAKDD'07 Proceedings of the 11th Pacific-Asia conference on Advances in knowledge discovery and data mining
It has been shown that a class of probabilistic domain models cannot be learned correctly by several existing algorithms that employ a single-link lookahead search. When a multi-link lookahead search is used instead, the computational complexity of the learning algorithm increases. We study how to use parallelism to tackle this increased complexity in learning such models and to speed up learning in large domains. We propose an algorithm that decomposes the learning task for parallel processing. A further task decomposition balances the load among processors, increasing the speed-up and efficiency. For learning from very large datasets, we present a regrouping of the available processors so that slow file-based data access can be replaced by fast in-memory access. Our implementation on a parallel computer demonstrates the effectiveness of the algorithm.
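The decomposition idea in the abstract can be illustrated with a minimal sketch: enumerate the candidate structures examined by a multi-link lookahead of depth k, split them into equal-sized chunks (one per processor) for load balance, score each chunk in a separate worker, and keep the global best. The node names and the toy scoring function below are hypothetical stand-ins, not the paper's actual entropy-based score or its decomposition scheme.

```python
from itertools import combinations
from multiprocessing import Pool

NODES = ["a", "b", "c", "d", "e"]  # hypothetical domain variables


def score(edge_set):
    """Toy stand-in for an entropy-based structure score (not the paper's)."""
    touched = {n for edge in edge_set for n in edge}
    return 2 * len(edge_set) - len(touched)


def candidate_edge_sets(nodes, k):
    """All non-empty sets of up to k edges: the multi-link lookahead space."""
    edges = list(combinations(nodes, 2))
    for r in range(1, k + 1):
        for subset in combinations(edges, r):
            yield subset


def best_in_chunk(chunk):
    """Each worker scores only its own chunk of candidates."""
    return max(chunk, key=score)


def parallel_search(nodes, k, workers=4):
    candidates = list(candidate_edge_sets(nodes, k))
    # Static decomposition: stride the candidate list into one chunk per
    # worker so the chunks have near-equal size (simple load balancing).
    chunks = [c for c in (candidates[i::workers] for i in range(workers)) if c]
    with Pool(len(chunks)) as pool:
        local_best = pool.map(best_in_chunk, chunks)
    # Reduce: the global optimum is the best of the per-worker optima.
    return max(local_best, key=score)


if __name__ == "__main__":
    best = parallel_search(NODES, k=2)
    print(score(best))
```

Because each chunk is scored independently and the final reduction is a single `max`, the best candidate found this way always matches a sequential scan of the whole lookahead space; only the wall-clock time changes.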