The dominant approach to learning Bayesian networks from data combines a scoring metric, which evaluates how well a candidate network fits the data, with a search procedure, which explores the space of possible solutions. The most widely used method in this family is (iterated) hill climbing, because of its good trade-off between CPU requirements, accuracy of the resulting model, and ease of implementation. In this paper we focus on the search space of DAGs and on the use of hill climbing as the search engine. Our proposal is to reduce the number of candidate DAGs (neighbors) explored at each iteration, making the method more efficient in CPU time without decreasing the quality of the model discovered. Initially the parent set of each variable is unrestricted, so all neighbors are explored; during this exploration we exploit the properties of locally consistent metrics to remove some nodes from the set of candidate parents, thereby constraining the search in subsequent iterations. We show the benefits of our proposal by carrying out several experiments in three different domains.
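To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of score-based hill climbing over DAGs with the kind of candidate-parent pruning described above. All names (`bic_local`, `hill_climb`, the toy binary dataset) are illustrative assumptions; the pruning rule used here — permanently discarding a candidate parent whose addition does not improve the child's local score — is a simplified stand-in for the conditions the paper derives from locally consistent metrics.

```python
import math

def bic_local(data, child, parents):
    """BIC local score of binary `child` given a tuple of `parents`.
    data: list of dicts mapping variable name -> 0/1."""
    n = len(data)
    counts = {}
    for row in data:
        key = tuple(row[p] for p in parents)
        counts.setdefault(key, [0, 0])[row[child]] += 1
    ll = 0.0
    for c0, c1 in counts.values():
        tot = c0 + c1
        for c in (c0, c1):
            if c:
                ll += c * math.log(c / tot)
    k = 2 ** len(parents)  # free parameters for a binary child
    return ll - 0.5 * k * math.log(n)

def would_cycle(parents, frm, to):
    """Adding frm -> to closes a cycle iff `to` is already an ancestor of `frm`."""
    stack, seen = [frm], set()
    while stack:
        v = stack.pop()
        if v == to:
            return True
        if v not in seen:
            seen.add(v)
            stack.extend(parents[v])
    return False

def hill_climb(data, variables):
    parents = {v: set() for v in variables}          # current DAG
    candidates = {v: set(variables) - {v} for v in variables}
    local = {v: bic_local(data, v, ()) for v in variables}
    improved = True
    while improved:
        improved = False
        best = None  # (delta, frm, to, new_local_score)
        for to in variables:
            for frm in sorted(candidates[to]):
                if frm in parents[to] or would_cycle(parents, frm, to):
                    continue
                new = bic_local(data, to, tuple(sorted(parents[to] | {frm})))
                delta = new - local[to]
                if delta <= 0:
                    # Pruning step (simplified): a non-improving parent is
                    # removed from the candidate set for all later iterations.
                    candidates[to].discard(frm)
                elif best is None or delta > best[0]:
                    best = (delta, frm, to, new)
        if best:
            _, frm, to, new = best
            parents[to].add(frm)
            local[to] = new
            improved = True
    return parents
```

On a toy dataset where X and Y are strongly correlated and Z is independent of both, the search adds a single arc between X and Y, while every arc involving Z is pruned after its first (non-improving) evaluation, so later iterations never rescore it. A full implementation would also consider arc deletions and reversals; they are omitted here to keep the sketch short.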