An Experimental and Theoretical Comparison of Model Selection Methods
Machine Learning - Special issue on the Eighth Annual Conference on Computational Learning Theory (COLT '95)
Change-Point Estimation Using New Minimum Message Length Approximations
PRICAI '02 Proceedings of the 7th Pacific Rim International Conference on Artificial Intelligence: Trends in Artificial Intelligence
Minimum Message Length Grouping of Ordered Data
ALT '00 Proceedings of the 11th International Conference on Algorithmic Learning Theory
MML mixture models of heterogeneous Poisson processes with uniform outliers for bridge deterioration
AI'06 Proceedings of the 19th Australian joint conference on Artificial Intelligence: advances in Artificial Intelligence
In an earlier paper, Kearns et al. (1997) presented an empirical evaluation of model selection methods on a specialized version of the segmentation problem. The inference task was the estimation of a predefined Boolean function on the real interval [0,1] from a noisy random sample. Three model selection methods, based on Guaranteed Risk Minimization (GRM), the Minimum Description Length (MDL) principle, and cross-validation, were evaluated on samples with varying noise levels. The authors concluded that, in general, none of the methods was superior to the others in terms of predictive accuracy. In this paper we identify an inefficiency in the MDL approach as implemented by Kearns et al. and present an extended empirical evaluation that includes a revised version of the MDL method and another approach based on the Minimum Message Length (MML) principle.
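To make the two-part trade-off behind description-length methods concrete, the following is a minimal sketch (not the coding scheme of any of the papers above): points on [0,1] carry noisy Boolean labels, a hypothesis is a segmentation into intervals of constant label, and the penalized cost charges `switch_cost` bits per change point plus `error_cost` bits per misclassified point. The function name, cost parameters, and dynamic program are all illustrative assumptions.

```python
def mdl_segment(ys, switch_cost, error_cost):
    """Minimum two-part cost of any alternating-interval labeling of ys.

    ys          : sequence of observed 0/1 labels, in order of position.
    switch_cost : illustrative price (in bits) for introducing a change point.
    error_cost  : illustrative price (in bits) for each label the hypothesis
                  gets wrong on the sample.
    """
    # f[b] = best penalized cost for the points seen so far, given that the
    # current interval of the hypothesis carries label b.
    f = {0: 0.0, 1: 0.0}
    for y in ys:
        g = {}
        for b in (0, 1):
            stay = f[b]                       # continue the current interval
            switch = f[1 - b] + switch_cost   # open a new interval (change point)
            g[b] = min(stay, switch) + (error_cost if y != b else 0.0)
        f = g
    return min(f.values())

# A sample whose true structure has change points after the 3rd and 6th points:
ys = [0, 0, 0, 1, 1, 1, 0, 0]
# Moderate change-point price: paying for two switches beats mislabeling points.
print(mdl_segment(ys, switch_cost=1.2, error_cost=1.0))   # 2.4
# Very expensive switches: the best hypothesis is a single interval, 3 errors.
print(mdl_segment(ys, switch_cost=10.0, error_cost=1.0))  # 3.0
```

With `switch_cost = 0` the cost collapses to 0, i.e. the hypothesis fits every noisy label exactly; the inefficiency discussed in the paper concerns how the MDL implementation of Kearns et al. prices these two parts, not this toy recursion.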