Probabilistic reasoning in intelligent systems: networks of plausible inference
The topological fusion of Bayes nets
UAI '92 Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence
Computers and Intractability: A Guide to the Theory of NP-Completeness
Optimal structure identification with greedy search
The Journal of Machine Learning Research
Large-Sample Learning of Bayesian Networks is NP-Hard
The Journal of Machine Learning Research
Consensus Genetic Maps: A Graph Theoretic Approach
CSB '05 Proceedings of the 2005 IEEE Computational Systems Bioinformatics Conference
Towards scalable and data efficient learning of Markov boundaries
International Journal of Approximate Reasoning
An application of formal argumentation: Fusing Bayesian networks in multi-agent systems
Artificial Intelligence
Probabilistic Graphical Models: Principles and Techniques - Adaptive Computation and Machine Learning
Graphical representations of consensus belief
UAI'99 Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence
Finding optimal Bayesian networks
UAI'02 Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence
Aggregating learned probabilistic beliefs
UAI'01 Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence
Bayesian networks from the point of view of chain graphs
UAI'98 Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence
A transformational characterization of equivalent Bayesian network structures
UAI'95 Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence
Some complexity considerations in the combination of belief networks
UAI'93 Proceedings of the Ninth International Conference on Uncertainty in Artificial Intelligence
Deriving a minimal I-map of a belief network relative to a target ordering of its nodes
UAI'93 Proceedings of the Ninth International Conference on Uncertainty in Artificial Intelligence
On local optima in learning Bayesian networks
UAI'03 Proceedings of the Nineteenth Conference on Uncertainty in Artificial Intelligence
An experimental comparison of hybrid algorithms for Bayesian network structure learning
ECML PKDD'12 Proceedings of the 2012 European Conference on Machine Learning and Knowledge Discovery in Databases - Volume Part I
DemocraticOP: A Democratic way of aggregating Bayesian network parameters
International Journal of Approximate Reasoning
Qualitative combination of independence models
ECSQARU'13 Proceedings of the 12th European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty
Suppose that multiple experts (or learning algorithms) provide us with alternative Bayesian network (BN) structures over a domain, and that we are interested in combining them into a single consensus BN structure. Specifically, we require that the consensus BN structure represent only those independences that all the given BN structures agree upon, and that it have as few associated parameters as possible. We prove that several non-equivalent consensus BN structures may exist and that finding one of them is NP-hard. We therefore resort to heuristics to find an approximate consensus BN structure. In particular, we consider the heuristic proposed by Matzkevich and Abramson, which builds upon two algorithms, called Methods A and B, for efficiently deriving the minimal directed independence map of a BN structure relative to a given node ordering. Methods A and B are claimed to be correct, but no full proof is provided (a proof is only sketched). We show that Methods A and B are in fact incorrect and propose a correction of them.
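The fusion idea behind this line of work can be illustrated with a naive sketch (this is NOT the paper's corrected Methods A and B, and all function names here are invented for illustration): convert each input DAG into a minimal directed independence map (I-map) relative to a shared node ordering, where each node keeps exactly those predecessors it is not d-separated from given the remaining predecessors, and then take the edge-wise union of the resulting I-maps. Since every I-map respects the same ordering, the union is again a DAG.

```python
# Hypothetical sketch of ordering-based BN structure fusion.
# DAGs are dicts mapping each node to its set of parents.

def ancestors(parents, nodes):
    """All ancestors of `nodes` in the DAG, including `nodes` themselves."""
    seen, stack = set(nodes), list(nodes)
    while stack:
        for p in parents.get(stack.pop(), ()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def d_separated(parents, x, y, z):
    """Test d-separation of x and y given z via the moralized ancestral graph."""
    relevant = ancestors(parents, {x, y} | set(z))
    adj = {n: set() for n in relevant}
    for n in relevant:
        ps = parents.get(n, set()) & relevant
        for p in ps:                      # connect each node to its parents
            adj[n].add(p)
            adj[p].add(n)
        for p in ps:                      # "marry" co-parents (moralization)
            for q in ps:
                if p != q:
                    adj[p].add(q)
    blocked, seen, stack = set(z), {x}, [x]
    while stack:                          # undirected reachability avoiding z
        n = stack.pop()
        if n == y:
            return False
        for m in adj[n]:
            if m not in seen and m not in blocked:
                seen.add(m)
                stack.append(m)
    return True

def minimal_imap(parents, order):
    """Minimal I-map of a DAG relative to `order`: each node keeps exactly the
    predecessors it is not d-separated from given the remaining predecessors."""
    new = {}
    for i, v in enumerate(order):
        preds = order[:i]
        new[v] = {p for p in preds
                  if not d_separated(parents, v, p,
                                     [q for q in preds if q != p])}
    return new

def consensus(dags, order):
    """Edge-wise union of minimal I-maps; a DAG since all respect `order`."""
    imaps = [minimal_imap(d, order) for d in dags]
    return {v: set().union(*(m[v] for m in imaps)) for v in order}
```

For example, fusing the chain a -> b -> c with the single edge a -> c under the ordering (a, b, c) yields consensus parents b: {a} and c: {a, b}. As the paper argues, such ordering-based heuristics give only an approximation: the result need not be a parameter-minimal consensus structure, and the ordering itself strongly affects the outcome.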