Learning coefficients of layered models when the true distribution mismatches the singularities

  • Authors:
  • Sumio Watanabe; Shun-ichi Amari

  • Affiliations:
  • Precision and Intelligence Laboratory, Tokyo Institute of Technology, Midori-ku, Yokohama, 226-8503, Japan; Laboratory for Mathematical Neuroscience, RIKEN Brain Science Institute, Wako-shi, Saitama, 351-0198, Japan

  • Venue:
  • Neural Computation
  • Year:
  • 2003

Abstract

Hierarchical learning machines such as layered neural networks have singularities in their parameter spaces. At singularities, the Fisher information matrix becomes degenerate, with the result that the conventional learning theory of regular statistical models does not hold. Recently, it was proved that if the parameter of the true distribution is contained in the singularities of the learning machine, the generalization error in Bayes estimation is asymptotically equal to λ/n, where 2λ is smaller than the dimension of the parameter and n is the number of training samples. However, the constant λ strongly depends on the local geometrical structure of the singularities; hence, the generalization error has not yet been clarified when the true distribution is almost, but not completely, contained in the singularities. In this article, in order to analyze such cases, we study the Bayes generalization error under the condition that the Kullback distance of the true distribution from the distribution represented by the singularities is proportional to 1/n, and we show two results. First, if the dimension of the parameter from inputs to hidden units is not larger than three, then there exists a region of true parameters for which the generalization error is larger than that of the corresponding regular model. Second, if the dimension of the parameter from inputs to hidden units is larger than three, then for an arbitrary true distribution, the generalization error is smaller than that of the corresponding regular model.
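
As a rough reading aid, the two asymptotic statements above can be written out as follows. This is a minimal sketch in LaTeX, and the notation (q for the true distribution, p(x|w) for the learning machine, W_0 for the set of singular parameters, G(n) for the Bayes generalization error, K for the Kullback distance) is introduced here for illustration and is not necessarily that of the paper.

  % When the true parameter lies exactly on the singularities,
  % the Bayes generalization error decays as
  \[
    G(n) \simeq \frac{\lambda}{n}, \qquad 2\lambda < d,
  \]
  % where d is the dimension of the parameter space and the
  % constant \lambda depends on the local geometry of the singularities.

  % This article studies the near-singular regime, in which the
  % Kullback distance from the true distribution to the set of
  % distributions represented by the singularities scales as
  \[
    \min_{w \in W_0} K\bigl(q \,\|\, p(\cdot \mid w)\bigr) \;\propto\; \frac{1}{n}.
  \]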