An analysis of generalization error in relevant subtask learning

  • Authors:
  • Keisuke Yamazaki; Samuel Kaski

  • Affiliations:
  • Precision and Intelligence Laboratory, Tokyo Institute of Technology, Yokohama, Japan; Department of Information and Computer Science, Helsinki University of Technology (TKK), Finland

  • Venue:
  • ICONIP'08: Proceedings of the 15th International Conference on Advances in Neuro-Information Processing, Part I
  • Year:
  • 2008


Abstract

A recent variant of multi-task learning uses the other tasks to help in learning a task-of-interest for which there is too little training data. The task can be classification, prediction, or density estimation. The problem is that only some of the data of the other tasks are relevant to, or representative of, the task-of-interest. It has been experimentally demonstrated that a generative model works well in this relevant subtask learning setting. In this paper we analyze the generalization error of the model, showing that it is smaller than in standard alternatives, and point out connections to semi-supervised learning, multi-task learning, and active learning or covariate shift.
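
To make the setting concrete, below is a minimal toy sketch of the idea the abstract describes: another task supplies extra data, but only an unknown fraction of it comes from the same distribution as the task-of-interest. This is not the authors' model; the Gaussian densities, the EM fitting procedure, and all variable names are illustrative assumptions. The other task's sample is modeled as a mixture of the target density and a nuisance density, and the relevant fraction is estimated jointly with the target parameters.

```python
# Hypothetical sketch of "relevant subtask" learning with a generative model:
# task-of-interest data follow a target density p(x | theta); the other task's
# data are a mixture of that target density and an irrelevant nuisance density.
# Both are modeled as 1-D Gaussians and fit by EM. Illustration only.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Toy data: a scarce task-of-interest sample and a larger mixed sample from
# another task, only part of which is drawn from the same distribution.
x_target = rng.normal(loc=0.0, scale=1.0, size=20)
x_other = np.concatenate([
    rng.normal(loc=0.0, scale=1.0, size=150),   # relevant subtask
    rng.normal(loc=4.0, scale=1.5, size=150),   # irrelevant data
])

# Parameters: target Gaussian (mu, sigma), nuisance Gaussian (mu_n, sigma_n),
# and the proportion pi of the other task's data that is relevant.
mu, sigma = x_target.mean(), x_target.std() + 1e-3
mu_n, sigma_n = x_other.mean() + 1.0, x_other.std()
pi = 0.5

for _ in range(100):
    # E-step: posterior probability that each "other task" point is relevant.
    p_rel = pi * norm.pdf(x_other, mu, sigma)
    p_irr = (1 - pi) * norm.pdf(x_other, mu_n, sigma_n)
    r = p_rel / (p_rel + p_irr)

    # M-step: the target component is updated from the task-of-interest data
    # plus the softly assigned relevant part of the other task's data.
    x_all = np.concatenate([x_target, x_other])
    w_target = np.concatenate([np.ones_like(x_target), r])
    mu = np.average(x_all, weights=w_target)
    sigma = np.sqrt(np.average((x_all - mu) ** 2, weights=w_target)) + 1e-6

    mu_n = np.average(x_other, weights=1 - r)
    sigma_n = np.sqrt(np.average((x_other - mu_n) ** 2, weights=1 - r)) + 1e-6
    pi = r.mean()

print(f"estimated target mean {mu:.2f}, relevant fraction pi {pi:.2f}")
```

Run on this toy data, the estimate of the target mean stays close to the true value and pi recovers roughly the fraction of relevant points, illustrating how borrowing only the relevant subset of the other task's data can reduce the error of the scarce-data estimate.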