In the analysis of data, the discovery of dependence relations can play a very important role. Our principal aim in this paper is to present a new score for determining when two categorical variables are independent. It can be summarized as an interval-valued score based on the score of Heckerman, Geiger, and Chickering, and it can be used in supervised classification tasks. We carry out an empirical comparison with different scores for deciding when two binary variables are independent, considering the following measures: the Bayesian score metric, the Bayesian information criterion (BIC), the p-value of the chi-square test for independence, and the upper entropy score based on imprecise probabilities. We will see that our new score behaves more like statistical tests for small samples and more like Bayesian procedures for large samples. We find this behavior very appropriate for some types of problems.
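To make the comparison concrete, two of the baseline measures mentioned above can be sketched for a 2x2 contingency table: the p-value of the chi-square test for independence and a BIC-style comparison of the dependent versus independent model. This is a minimal illustrative sketch, not the paper's interval-valued score; the function name `independence_scores` and the decision rule shown are assumptions for illustration.

```python
import math

def independence_scores(table):
    """Given a 2x2 contingency table of counts for two binary variables,
    return (chi-square p-value, whether BIC prefers the dependent model).

    Illustrative sketch only: the paper's interval-valued score is not
    implemented here, only two of the baseline criteria it is compared to.
    """
    n = sum(sum(row) for row in table)
    rows = [sum(row) for row in table]          # marginal counts of X
    cols = [sum(col) for col in zip(*table)]    # marginal counts of Y

    # Pearson chi-square statistic (1 degree of freedom for a 2x2 table)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = rows[i] * cols[j] / n
            chi2 += (table[i][j] - expected) ** 2 / expected
    # Survival function of a chi-square with 1 df: P(X > x) = erfc(sqrt(x/2))
    p_value = math.erfc(math.sqrt(chi2 / 2.0))

    # Maximized log-likelihoods of the dependent (saturated) model
    # and of the independence model (product of the marginals)
    ll_dep = sum(table[i][j] * math.log(table[i][j] / n)
                 for i in range(2) for j in range(2) if table[i][j] > 0)
    ll_ind = (sum(r * math.log(r / n) for r in rows if r > 0)
              + sum(c * math.log(c / n) for c in cols if c > 0))
    # BIC penalizes the one extra free parameter of the dependent model
    bic_prefers_dependence = (ll_dep - ll_ind) > 0.5 * math.log(n)
    return p_value, bic_prefers_dependence
```

For a strongly associated table such as `[[30, 10], [10, 30]]`, both criteria reject independence; for a perfectly balanced table such as `[[20, 20], [20, 20]]`, the p-value is 1 and BIC keeps the simpler independent model.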