Statistical learning machines whose parameter spaces contain singularities, such as hidden Markov models, Bayesian networks, and neural networks, are widely used in information engineering. In the Bayesian setting, these singularities determine the accuracy of estimation. The Newton diagram from algebraic geometry is recognized as an effective tool for investigating a singularity. The present paper proposes a new technique for incorporating the diagram into Bayesian analysis. The proposed technique clarifies the generalization error and provides a foundation for efficient model selection. We apply it to mixtures of binomial distributions.
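To make the notion of a singularity concrete, the following minimal sketch (our illustration, not code from the paper; all function names are ours) shows why a two-component binomial mixture is nonidentifiable: when the two success probabilities coincide, every mixing weight produces the same distribution, so the map from parameters to distributions is singular there.

```python
from math import comb

def binom_pmf(k, N, q):
    """Probability of k successes under Binomial(N, q)."""
    return comb(N, k) * q**k * (1 - q)**(N - k)

def mixture_pmf(k, N, a, q1, q2):
    """Two-component mixture: a * Bin(N, q1) + (1 - a) * Bin(N, q2)."""
    return a * binom_pmf(k, N, q1) + (1 - a) * binom_pmf(k, N, q2)

# Nonidentifiability: with q1 == q2, the mixing weight a drops out of
# the model, so distinct parameters (a = 0.3 vs a = 0.9) define the
# same distribution -- the singular set of the parameter space.
N = 10
p1 = [mixture_pmf(k, N, 0.3, 0.5, 0.5) for k in range(N + 1)]
p2 = [mixture_pmf(k, N, 0.9, 0.5, 0.5) for k in range(N + 1)]
assert all(abs(x - y) < 1e-12 for x, y in zip(p1, p2))
```

On this singular set the Fisher information matrix degenerates, which is why the Bayesian generalization error must be analyzed with algebro-geometric tools such as the Newton diagram rather than by classical regular asymptotics.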