We study the problem of learning an optimal Bayesian network in a constrained search space: skeletons are compelled to be subgraphs of a given undirected graph called the super-structure. The previously derived constrained optimal search (COS) remains limited even for sparse super-structures. To extend its feasibility, we propose to divide the super-structure into several clusters and perform an optimal search on each of them. Further, to ensure acyclicity, we introduce the concept of ancestral constraints (ACs) and derive an optimal algorithm satisfying a given set of ACs. Finally, we theoretically derive the necessary and sufficient sets of ACs to consider for finding an optimal constrained graph. Empirical evaluations demonstrate that our algorithm can learn optimal Bayesian networks for some graphs containing several hundred vertices, and even for super-structures having a high average degree (up to four), which is a drastic improvement in feasibility over the previous optimal algorithm. Learnt networks are shown to largely outperform state-of-the-art heuristic algorithms in terms of both score and structural Hamming distance.
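The clustered algorithm itself is not reproduced here, but the core idea it builds on can be illustrated: an exact, score-based ordering search in which each vertex's candidate parent sets are restricted to its neighbors in the super-structure. The sketch below is a minimal illustration under assumed names (`learn_optimal_dag`, `toy_score`, and the toy scoring scheme are hypothetical, not from the paper); it uses a Silander–Myllymäki-style dynamic program over vertex subsets rather than the paper's cluster decomposition with ancestral constraints.

```python
from itertools import combinations

def learn_optimal_dag(n, super_structure, local_score):
    """Exact score-based structure search where the parents of each vertex
    are restricted to its neighbors in the undirected super-structure
    (ordering dynamic programming over vertex subsets)."""

    def best_parents(v, allowed_mask):
        # Candidate parents: super-structure neighbors already placed in the ordering.
        cand = [u for u in super_structure[v] if allowed_mask >> u & 1]
        best = (local_score(v, ()), ())
        for k in range(1, len(cand) + 1):
            for ps in combinations(cand, k):
                s = local_score(v, ps)
                if s > best[0]:
                    best = (s, ps)
        return best

    full = (1 << n) - 1
    dp = {0: (0.0, [])}  # placed-vertex mask -> (total score, [(vertex, parents)])
    for mask in range(1, full + 1):
        best = None
        for v in range(n):
            if not mask >> v & 1:
                continue
            prev_score, prev_net = dp[mask ^ (1 << v)]
            s_v, ps = best_parents(v, mask ^ (1 << v))
            if best is None or prev_score + s_v > best[0]:
                best = (prev_score + s_v, prev_net + [(v, ps)])
        dp[mask] = best
    return dp[full]

# Hypothetical decomposable score that rewards the true parent sets of
# the chain 0 -> 1 -> 2 and penalizes extra parents.
true_parents = {0: set(), 1: {0}, 2: {1}}

def toy_score(v, parents):
    ps = set(parents)
    return 2.0 * len(ps & true_parents[v]) - len(ps - true_parents[v])

super_structure = {0: {1}, 1: {0, 2}, 2: {1}}  # undirected skeleton constraint
score, network = learn_optimal_dag(3, super_structure, toy_score)
# Recovers the chain 0 -> 1 -> 2 with total score 4.0
```

Restricting candidate parent sets to super-structure neighbors shrinks the per-vertex enumeration from all subsets of the other vertices to subsets of a (hopefully small) neighborhood; the subset DP over all vertices, however, still costs O(2^n), which is exactly the bottleneck that motivates the paper's divide-into-clusters approach.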