RSFDGrC'05 Proceedings of the 10th international conference on Rough Sets, Fuzzy Sets, Data Mining, and Granular Computing - Volume Part I
A class of problem domains known as pseudo-independent (PI) models poses difficulty for common learning methods, which rely on single-link lookahead search. Learning such domain models requires a multiple-link lookahead search. Results improve further when model complexity is incorporated into the scoring metric, so that model accuracy is explicitly traded off against complexity when the best model is selected among candidates at each learning step. Previous studies derived the complexity formulae for full PI models (the simplest type of PI model) and for atomic PI models (PI models that contain no PI submodels). This study presents the complexity formula for non-atomic PI models, which are both more complex and more general than full or atomic PI models. Together with the previous results, it completes the major theoretical groundwork for the new learning algorithm that combines complexity and accuracy.
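The trade-off described above can be illustrated with a minimal MDL/BIC-style score: an accuracy term (maximum-likelihood log-likelihood) minus a complexity penalty proportional to the number of free parameters. This is an illustrative sketch, not the paper's metric or formulae; all function names and the parity data below are assumptions for the example. The parity pattern (c = a XOR b, with a, b independent) is the classic PI behaviour: variables are pairwise independent but jointly dependent, so adding one link at a time shows no gain.

```python
import math
from collections import Counter

def log_likelihood(data, parents):
    """ML log-likelihood of binary data under a DAG structure.
    `parents` maps each variable index to a tuple of parent indices."""
    ll = 0.0
    for v, pa in parents.items():
        joint = Counter(tuple(row[i] for i in (*pa, v)) for row in data)
        marg = Counter(tuple(row[i] for i in pa) for row in data)
        for config, c in joint.items():
            # ML estimate of P(v | pa) is joint count / parent-config count
            ll += c * math.log(c / marg[config[:-1]])
    return ll

def num_params(parents, arity=2):
    # (arity - 1) free parameters per parent configuration
    return sum((arity - 1) * arity ** len(pa) for pa in parents.values())

def mdl_score(data, parents):
    # accuracy minus a BIC-style complexity penalty
    return log_likelihood(data, parents) - 0.5 * math.log(len(data)) * num_params(parents)

# Parity data: a, b uniform and independent, c = a XOR b (a PI pattern).
data = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)] * 25

empty  = {0: (), 1: (), 2: ()}
single = {0: (), 1: (), 2: (0,)}    # one single-link step toward c
full   = {0: (), 1: (), 2: (0, 1)}  # both links to c added together
```

On this data, `mdl_score(data, single)` falls below `mdl_score(data, empty)` (no likelihood gain, extra parameters), while `mdl_score(data, full)` beats both: a single-link lookahead search would stop at the empty structure, whereas a multiple-link step recovers the dependency, which is the behaviour the abstract describes.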