Experimental Bayesian Generalization Error of Non-regular Models under Covariate Shift

  • Authors:
  • Keisuke Yamazaki; Sumio Watanabe

  • Affiliations:
  • Precision and Intelligence Laboratory, Tokyo Institute of Technology, Midori-ku, Japan 226-8503 (both authors)

  • Venue:
  • Neural Information Processing
  • Year:
  • 2007

Abstract

In the standard setting of statistical learning theory, the training and test data are assumed to be generated from the same distribution. However, this assumption does not hold in many practical cases, e.g., brain-computer interfacing and bioinformatics. In particular, a change of the input distribution in regression problems occurs frequently and is known as covariate shift. Many studies have addressed adapting to this change, since ordinary machine learning methods do not work properly under the shift. An asymptotic theory has also been developed for Bayesian inference. Although many effective results have been reported for statistically regular models, non-regular models have not been studied well. This paper focuses on the behavior of non-regular models under covariate shift. In a former study [1], we formally revealed the factors that change the generalization error and established its upper bound. Here we report experimental results that support the theoretical findings. Moreover, it is observed that the basis functions of the model play an important role in some cases.
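The covariate-shift setting described in the abstract can be sketched numerically. The following is a minimal illustration (a hypothetical toy setup, not the experiment from the paper): the regression function p(y|x) is fixed, but the input density p(x) differs between training and test, which inflates the generalization error of a misspecified model fit on the training sample.

```python
# Toy illustration of covariate shift in regression.
# The true function and noise model are shared, but the
# training inputs come from N(0, 1) while the (shifted)
# test inputs come from N(2.5, 1).
import numpy as np

rng = np.random.default_rng(0)

def true_f(x):
    # Fixed regression function shared by training and test data.
    return np.sin(x)

x_train = rng.normal(0.0, 1.0, 500)
y_train = true_f(x_train) + 0.1 * rng.normal(size=500)

# Fit a (misspecified) linear model on the training sample.
w = np.polyfit(x_train, y_train, deg=1)

def mse(x):
    # Generalization error of the fitted line under input density of x.
    return float(np.mean((np.polyval(w, x) - true_f(x)) ** 2))

x_same = rng.normal(0.0, 1.0, 10000)    # same input distribution
x_shift = rng.normal(2.5, 1.0, 10000)   # shifted input distribution

err_same, err_shift = mse(x_same), mse(x_shift)
print(err_same < err_shift)  # the shift inflates the error
```

Under this shift the linear fit, adequate near the training inputs, extrapolates poorly into the shifted region, so the error under the test input distribution is much larger. This is the phenomenon that covariate-shift adaptation methods (e.g., importance weighting) are designed to mitigate.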