We study cross-validation as a scoring criterion for learning dynamic Bayesian network models that generalize well. We argue that cross-validation is more suitable than the Bayesian scoring criterion for one of the most common interpretations of generalization. We confirm this by carrying out an experimental comparison of cross-validation and the Bayesian scoring criterion, as implemented by the Bayesian Dirichlet metric and the Bayesian information criterion. The results show that cross-validation leads to models that generalize better for a wide range of sample sizes.
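The paper scores full dynamic Bayesian network structures, but the core idea — using k-fold cross-validated held-out log-likelihood as the scoring criterion for model selection — can be sketched on a toy problem. The following is a minimal, hypothetical Python illustration, not the authors' implementation: it chooses between two candidate structures for a pair of binary variables, one assuming independence and one modeling the full joint distribution. The function names, Laplace smoothing, and data-generating process are all illustrative assumptions.

```python
import numpy as np

def fit_independent(train):
    """Structure 1: X and Y independent. Returns a held-out log-likelihood function."""
    n = len(train)
    px = (train[:, 0].sum() + 1) / (n + 2)  # Laplace-smoothed P(X=1)
    py = (train[:, 1].sum() + 1) / (n + 2)  # Laplace-smoothed P(Y=1)
    def loglik(test):
        lx = np.where(test[:, 0] == 1, np.log(px), np.log(1 - px))
        ly = np.where(test[:, 1] == 1, np.log(py), np.log(1 - py))
        return float((lx + ly).sum())
    return loglik

def fit_joint(train):
    """Structure 2: full joint P(X, Y) over the 4 cells, Laplace-smoothed."""
    n = len(train)
    cells = train[:, 0] * 2 + train[:, 1]
    counts = np.bincount(cells, minlength=4) + 1
    logp = np.log(counts / (n + 4))
    def loglik(test):
        return float(logp[test[:, 0] * 2 + test[:, 1]].sum())
    return loglik

def cv_score(data, fit, k=5, seed=0):
    """k-fold cross-validated log-likelihood: fit on k-1 folds, score the held-out fold."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))
    total = 0.0
    for fold in np.array_split(idx, k):
        train = np.delete(data, fold, axis=0)
        total += fit(train)(data[fold])
    return total

# Illustrative data: Y copies X 90% of the time, so the variables are
# strongly dependent and the joint structure should generalize better.
rng = np.random.default_rng(1)
x = rng.integers(0, 2, size=500)
y = np.where(rng.random(500) < 0.1, 1 - x, x)
data = np.stack([x, y], axis=1)

print(cv_score(data, fit_joint) > cv_score(data, fit_independent))  # expected: True
```

The selection rule is simply to keep the candidate structure with the highest cross-validated score — the same role the Bayesian Dirichlet metric or BIC plays in the scoring criteria the paper compares against.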