Statistical theory of learning curves under entropic loss criterion. Neural Computation.
Algebraic geometrical methods for hierarchical learning machines. Neural Networks.
Reinforcement Learning with Factored States and Actions. The Journal of Machine Learning Research.
Asymptotic Model Selection for Naive Bayesian Networks. The Journal of Machine Learning Research.
Algebraic Analysis for Nonidentifiable Learning Machines. Neural Computation.
Restricted Boltzmann machines for collaborative filtering. Proceedings of the 24th International Conference on Machine Learning.
Algebraic Geometry and Statistical Learning Theory.
A decision-theoretic extension of stochastic complexity and its applications to learning. IEEE Transactions on Information Theory.
In this paper, we derive the asymptotic form of the Bayesian generalization error for the restricted Boltzmann machine. For hierarchical learning models, this asymptotic form is determined by the maximum pole of an associated zeta function (Watanabe, 2001a, 2001b), where the zeta function is defined in terms of a Kullback function. We obtain the maximum pole by two methods: a new eigenvalue analysis and a recursive blowing-up process. We show that both methods are effective for deriving the asymptotic form of the generalization error of hierarchical learning models.
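The connection between the zeta function and the generalization error can be sketched in the standard notation of singular learning theory (this notation is assumed, not quoted from the abstract):

```latex
% K(w): Kullback function (KL divergence) between the true
% distribution q(x) and the model p(x|w); \varphi(w): prior on
% the parameter w.
\[
  K(w) = \int q(x) \log \frac{q(x)}{p(x \mid w)} \, dx ,
  \qquad
  \zeta(z) = \int K(w)^{z} \, \varphi(w) \, dw .
\]
% \zeta(z) is holomorphic for \mathrm{Re}(z) > 0 and extends
% meromorphically to the whole complex plane. Let -\lambda be its
% maximum pole, with multiplicity m. Then the stochastic complexity
% F(n) for n training samples satisfies
\[
  F(n) = \lambda \log n - (m - 1) \log \log n + O(1) ,
\]
% and the Bayesian generalization error has the leading-order form
\[
  G(n) = \frac{\lambda}{n} + o\!\left(\frac{1}{n}\right).
\]
```

The two methods named in the abstract (eigenvalue analysis and recursive blow-ups) are techniques for resolving the singularities of $K(w) = 0$ so that the maximum pole $-\lambda$ can be computed explicitly for the restricted Boltzmann machine.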