Numerical recipes in C (2nd ed.): the art of scientific computing
Algebraic geometrical methods for hierarchical learning machines
Neural Networks
Algebraic Analysis for Nonidentifiable Learning Machines
Neural Computation
Learning machines such as neural networks, Gaussian mixtures, Bayes networks, hidden Markov models, and Boltzmann machines are called singular learning machines; they have been applied to many real problems such as pattern recognition, time-series prediction, and system control. Because of their hierarchical structures or symmetry properties, these learning machines have singular points in parameter space. Hence the maximum likelihood estimator does not have asymptotic normality, and the conventional asymptotic theory for regular statistical models cannot be applied. Theoretically optimal model selection and design therefore require algebraic geometrical analysis. This analysis obtains the maximum poles of the zeta functions of learning theory by resolution of singularities (blowing up); however, blowing up is hard to carry out for complex learning machines. In this paper, a new method that obtains the maximum poles of zeta functions in learning theory by numerical computation is proposed, and its effectiveness is shown by experimental results.
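The abstract does not spell out the paper's numerical algorithm, but the quantity it targets can be illustrated. For a zeta function ζ(z) = ∫ K(w)^z φ(w) dw, the maximum pole −λ governs the small-t scaling of the prior volume V(t) = Vol{w : K(w) < t} ≈ c·t^λ·(log 1/t)^(m−1). A minimal Monte Carlo sketch (an illustration under these assumptions, not the method proposed in the paper) estimates λ for the toy singular function K(w) = (w₁w₂)², whose true λ is 1/2:

```python
import numpy as np

# Toy singular "Kullback function": K(w) = (w1 * w2)^2 on [-1, 1]^2
# (hypothetical example; the true maximum pole here is at z = -1/2).
rng = np.random.default_rng(0)
N = 500_000
w = rng.uniform(-1.0, 1.0, size=(N, 2))
K = (w[:, 0] * w[:, 1]) ** 2

# Monte Carlo estimate of V(t) = Vol{w : K(w) < t}; domain area is 4.
ts = np.logspace(-6, -2, 9)
vols = np.array([4.0 * np.mean(K < t) for t in ts])

# Slope of log V(t) vs. log t approximates lambda; logarithmic
# corrections from the pole's multiplicity bias it slightly below 1/2.
lam, _ = np.polyfit(np.log(ts), np.log(vols), 1)
print(f"estimated lambda ~ {lam:.3f}")
```

The estimate comes out somewhat below the exact value 1/2 because this pole has multiplicity two, contributing a log(1/t) factor that a pure power-law fit absorbs into the slope; any practical method would need to account for such corrections.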