Generalization error of linear neural networks in an empirical Bayes approach

  • Authors:
  • Shinichi Nakajima;Sumio Watanabe

  • Affiliations:
  • Tokyo Institute of Technology, Yokohama, Kanagawa, Japan and Nikon Corporation, Kumagaya, Saitama, Japan;Tokyo Institute of Technology, Yokohama, Kanagawa, Japan

  • Venue:
  • IJCAI'05 Proceedings of the 19th international joint conference on Artificial intelligence
  • Year:
  • 2005

Abstract

It is well known that in unidentifiable models, Bayes estimation has an advantage in generalization performance over maximum likelihood estimation. However, accurate approximation of the posterior distribution incurs huge computational costs. In this paper, we consider an empirical Bayes approach in which a subset of the parameters is regarded as hyperparameters, which we call a subspace Bayes approach, and we theoretically analyze the generalization error of three-layer linear neural networks. We show that a subspace Bayes approach is asymptotically equivalent to a positive-part James-Stein type shrinkage estimation and behaves similarly to Bayes estimation in typical cases.
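The positive-part James-Stein estimator mentioned in the abstract can be sketched as follows. This is only an illustrative sketch of the classical estimator for a Gaussian mean; in the paper, the analogous shrinkage acts on quantities derived from the network's weight matrices, and the function name and example inputs here are illustrative, not taken from the paper.

```python
import numpy as np

def positive_part_james_stein(x):
    """Positive-part James-Stein estimate of theta from one observation
    X ~ N(theta, I_d), d >= 3.

    The observation is shrunk toward the origin by a data-dependent
    factor, clipped at zero so the factor never changes sign.
    """
    d = x.shape[0]
    assert d >= 3, "James-Stein dominates the MLE only for d >= 3"
    factor = max(0.0, 1.0 - (d - 2) / np.dot(x, x))
    return factor * x

# An observation close to the origin is shrunk all the way to zero ...
small = positive_part_james_stein(np.array([0.1, 0.1, 0.1]))
# ... while a large observation is shrunk only slightly.
large = positive_part_james_stein(np.array([10.0, 10.0, 10.0]))
```

The clipping at zero ("positive-part") is what distinguishes this from the plain James-Stein estimator: without it, observations with very small norm would have their sign flipped by a negative shrinkage factor.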