Thresholding a conditional independence (CI) measure with a fixed value enables adding and removing edges while learning a Bayesian network structure. However, the learned structure is sensitive to this threshold, which is commonly selected: 1) arbitrarily; 2) without regard to characteristics of the domain; and 3) identically for all CI tests. We analyze how factors such as sample size, degree of variable dependence, and the variables' cardinalities affect mutual information, a common CI measure. Based on this analysis, we suggest thresholding each test adaptively according to these factors. We show that adaptive thresholds distinguish pairs of dependent variables from pairs of independent variables better than fixed thresholds, enabling structures to be learned more accurately and quickly.
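As an illustrative sketch (not necessarily the authors' exact scheme), one standard way to make a mutual-information threshold depend on sample size and cardinality uses the classical asymptotic result that, under independence, 2N·MI (in nats) follows a chi-squared distribution with (|X|−1)(|Y|−1) degrees of freedom. The function names `empirical_mi` and `adaptive_threshold` below are hypothetical, chosen for this example:

```python
import numpy as np
from scipy.stats import chi2

def empirical_mi(x, y):
    """Empirical mutual information (in nats) between two discrete samples."""
    n = len(x)
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(x == xv) * np.mean(y == yv)))
    return mi

def adaptive_threshold(n, card_x, card_y, alpha=0.05):
    """Per-test MI threshold from the chi-squared approximation:
    under independence, 2*n*MI ~ chi2 with (|X|-1)(|Y|-1) d.o.f.,
    so the cutoff shrinks with sample size and grows with cardinality."""
    df = (card_x - 1) * (card_y - 1)
    return chi2.ppf(1 - alpha, df) / (2 * n)

# Toy data: one independent pair, one strongly dependent pair.
rng = np.random.default_rng(0)
n = 2000
x = rng.integers(0, 3, n)
y_indep = rng.integers(0, 3, n)           # independent of x
y_dep = (x + rng.integers(0, 2, n)) % 3   # dependent on x

t = adaptive_threshold(n, 3, 3)
keep_indep = empirical_mi(x, y_indep) > t  # usually False: edge removed
keep_dep = empirical_mi(x, y_dep) > t      # True here: edge kept
```

Note how the same observed MI can pass the test for a small sample yet fail it for a large one, since the threshold `chi2.ppf(1 - alpha, df) / (2 * n)` decreases with `n` — one concrete sense in which a fixed threshold ignores sample size.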