Neural Computation
Statistical theory of learning curves under entropic loss criterion. Neural Computation.
Algebraic geometrical methods for hierarchical learning machines. Neural Networks.
Asymptotic Model Selection for Naive Bayesian Networks. The Journal of Machine Learning Research.
Algebraic Analysis for Nonidentifiable Learning Machines. Neural Computation.
Algebraic Geometry and Statistical Learning Theory.
Equations of states in singular statistical estimation. Neural Networks.
Design of exchange Monte Carlo method for Bayesian learning in normal mixture models. ICONIP'08 Proceedings of the 15th International Conference on Advances in Neuro-Information Processing, Part I.
A decision-theoretic extension of stochastic complexity and its applications to learning. IEEE Transactions on Information Theory.
Universal coding, information, prediction, and estimation. IEEE Transactions on Information Theory.
Learning efficiency of redundant neural networks in Bayesian estimation. IEEE Transactions on Neural Networks.
Exchange Monte Carlo Sampling From Bayesian Posterior for Singular Learning Machines. IEEE Transactions on Neural Networks.
A widely applicable Bayesian information criterion. The Journal of Machine Learning Research.
The term algebraic statistics refers to the study of probabilistic models and techniques for statistical inference using methods from algebra and geometry (Sturmfels, 2009). The purpose of our study is to analyze the generalization error and stochastic complexity in learning theory by using the log-canonical threshold from algebraic geometry. This threshold gives the main term of the generalization error in Bayesian estimation and is called a learning coefficient (Watanabe, 2001a, 2001b). The learning coefficient measures the learning efficiency of hierarchical learning models. In this letter, we consider learning coefficients for Vandermonde matrix-type singularities by using a new approach: focusing on the generators of the ideal that defines the singularities. We give new tight bounds on the learning coefficients for Vandermonde matrix-type singularities, together with their explicit values under certain conditions. By applying these results, we obtain the learning coefficients of three-layered neural networks and normal mixture models.
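For context, the role of the learning coefficient in singular learning theory can be summarized as follows. This is a standard statement of Watanabe's results (2001a, 2001b), sketched here in our own notation and assuming the usual regularity conditions on the model and prior:

```latex
% K(w): Kullback--Leibler divergence from the true distribution to the
% model with parameter w; \varphi(w): prior density. The zeta function
%   \zeta(z) = \int K(w)^{z}\,\varphi(w)\,dw
% is holomorphic for \mathrm{Re}(z) > 0 and extends meromorphically to
% the complex plane. Its largest pole is at z = -\lambda with
% multiplicity m, where \lambda is the log-canonical threshold,
% i.e., the learning coefficient.
%
% For sample size n, the stochastic complexity F(n) and the mean
% Bayesian generalization error \mathbb{E}[G_n] then satisfy
\begin{align}
F(n) &= \lambda \log n - (m-1)\log\log n + O(1), \\
\mathbb{E}[G_n] &= \frac{\lambda}{n} - \frac{m-1}{n\log n}
  + o\!\left(\frac{1}{n\log n}\right).
\end{align}
```

In regular statistical models, $\lambda = d/2$ with $d$ the parameter dimension and $m = 1$, recovering the classical BIC term; for singular models such as neural networks and normal mixtures, $\lambda$ is generally smaller, which is why computing it for Vandermonde matrix-type singularities yields sharper learning curves.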