We analyze the differences between two information-theoretically motivated approaches to statistical inference and model selection: the Minimum Description Length (MDL) principle and the Minimum Message Length (MML) principle. Based on this analysis, we present two revised versions of MML: a pointwise estimator, which yields the single MML-optimal point in the parameter space, and a volumewise estimator, which yields the MML-optimal region of the parameter space. Our empirical results suggest that on small data sets the MDL approach yields more accurate predictions than the MML estimators. The empirical results also demonstrate that the revised MML estimators introduced here outperform the original MML estimator suggested by Wallace and Freeman.
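
For concreteness, below is a minimal Python sketch (not the paper's own implementation) of the two baseline criteria the abstract refers to: the NML code length used by the MDL approach, and the original Wallace-Freeman (1987) MML message length with its estimator. Both are written for a one-parameter Bernoulli model with a uniform prior, a standard textbook setting; the revised pointwise and volumewise MML estimators proposed in the paper are not reproduced here.

    import math

    def nml_code_length(k, n):
        """NML (stochastic complexity) code length, in nats, of a
        Bernoulli sequence with k ones out of n observations."""
        def max_ll(j, m):
            # Log-likelihood at the ML estimate j/m, with 0*log(0) := 0.
            ll = 0.0
            if j > 0:
                ll += j * math.log(j / m)
            if j < m:
                ll += (m - j) * math.log((m - j) / m)
            return ll
        # Normalizer: maximized likelihood summed over all possible data sets.
        log_C = math.log(sum(math.comb(n, j) * math.exp(max_ll(j, n))
                             for j in range(n + 1)))
        return -max_ll(k, n) + log_C

    def wf_mml(k, n):
        """Wallace-Freeman estimator and message length (nats) for a
        Bernoulli model under a uniform prior on theta."""
        theta = (k + 0.5) / (n + 1.0)       # WF estimate for the binomial
        neg_ll = -(k * math.log(theta) + (n - k) * math.log(1 - theta))
        fisher = n / (theta * (1 - theta))  # Fisher information I(theta)
        length = (neg_ll + 0.5 * math.log(fisher)
                  + 0.5 * (1 + math.log(1 / 12.0)))  # kappa_1 = 1/12
        return theta, length

    if __name__ == "__main__":
        k, n = 3, 10
        print("NML code length:", nml_code_length(k, n))
        print("WF estimate, message length:", wf_mml(k, n))

Note how the two criteria differ in spirit: NML penalizes complexity through a worst-case normalizer summed over all data sets of the same size, while the Wallace-Freeman length penalizes it through the Fisher information at a single estimated point, which is exactly the pointwise-versus-regionwise distinction the revised estimators address.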