Stochastic complexities of reduced rank regression in Bayesian estimation

  • Authors:
  • Miki Aoyagi; Sumio Watanabe

  • Affiliations:
  • Department of Mathematics, Sophia University, 7-1 Kioi-cho, Chiyoda-ku, Tokyo 102-8554, Japan; Precision and Intelligence Laboratory, Tokyo Institute of Technology, 4259 Nagatsuda, Midori-ku, Yokohama 226-8503, Japan

  • Venue:
  • Neural Networks
  • Year:
  • 2005

Abstract

Reduced rank regression extracts essential information from examples of input-output pairs. It can be understood as a three-layer neural network with linear hidden units. However, reduced rank approximation is a non-regular statistical model whose Fisher information matrix is degenerate, and its generalization error had remained unknown even in statistics. In this paper, we give the exact asymptotic form of its generalization error in Bayesian estimation, based on the resolution of the learning machine's singularities. For this purpose, we calculate the maximum pole of the zeta function of the learning theory. We propose a new method of recursive blowing-ups which yields the complete desingularization of the reduced rank approximation.
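The connection between the maximum pole of the zeta function and the generalization error can be sketched in the standard notation of singular learning theory (this summary uses conventional symbols, not necessarily those of the paper):

```latex
% K(w): Kullback-Leibler divergence from the true distribution to the
% model at parameter w; \varphi(w): the prior density.
\zeta(z) = \int K(w)^{z}\,\varphi(w)\,dw
% \zeta(z) extends meromorphically to the complex plane.  Let its largest
% pole be z = -\lambda with multiplicity m.  The Bayesian stochastic
% complexity for n examples then behaves asymptotically as
F(n) = \lambda \log n - (m-1)\log\log n + O(1),
% and the Bayesian generalization error decays as
G(n) = \frac{\lambda}{n} + o\!\left(\frac{1}{n}\right).
```

Computing the learning coefficient λ for reduced rank regression thus amounts to resolving the singularities of the set {w : K(w) = 0}, which is what the recursive blowing-up method accomplishes.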