Reduced rank regression extracts essential information from examples of input-output pairs. It can be understood as a three-layer neural network with linear hidden units. However, reduced rank approximation is a non-regular statistical model whose Fisher information matrix is degenerate, and its generalization error has remained unknown even in statistics. In this paper, we give the exact asymptotic form of its generalization error in Bayesian estimation, based on resolution of the singularities of the learning machine. For this purpose, we calculate the maximum pole of the zeta function of learning theory. We propose a new method of recursive blowing-ups that yields a complete desingularization of the reduced rank approximation.
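As a concrete illustration of the model the abstract describes, the following is a minimal sketch (not the paper's method) of reduced rank regression in NumPy. It assumes the simplified textbook recipe of fitting the full-rank least-squares map and truncating it by SVD (the exact reduced-rank solution also whitens by the output covariance), and then factors the rank-r map into the two weight layers of a linear three-layer network; the function name and variable names are illustrative.

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Rank-constrained linear map from inputs X (n, d_in) to outputs Y (n, d_out).

    Simplified sketch: full-rank least squares followed by SVD truncation.
    """
    # Full-rank least-squares solution W of X @ W ~ Y
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)          # shape (d_in, d_out)
    # Truncate W to the requested rank via its SVD
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    W_r = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]
    # Factor into the two layers of a linear three-layer network
    A = U[:, :rank] @ np.diag(s[:rank])                # input  -> hidden (rank units)
    B = Vt[:rank, :]                                   # hidden -> output
    return A, B, W_r
```

The factorization A @ B = W_r is exactly the three-layer linear network of the abstract; the non-identifiability (A, B) -> (A C, C^{-1} B) for any invertible C is what makes the Fisher information matrix degenerate.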