On Universal Transfer Learning

  • Authors:
  • M. M. Mahmud

  • Affiliations:
  • Department of Computer Science, University of Illinois at Urbana-Champaign, 201 N. Goodwin Avenue, Urbana, IL 61801, USA

  • Venue:
  • ALT '07: Proceedings of the 18th International Conference on Algorithmic Learning Theory
  • Year:
  • 2007

Abstract

In transfer learning the aim is to solve new learning tasks using fewer examples by exploiting information gained from solving related tasks. Existing transfer learning methods have been used successfully in practice, and a PAC analysis of these methods has been developed. But the key notion of relatedness between tasks has not yet been defined clearly, which makes it difficult to understand, let alone answer, questions that naturally arise in the context of transfer, such as how much information to transfer, whether to transfer information at all, and how to transfer information across tasks. In this paper we look at transfer learning from the perspective of Algorithmic Information Theory, and formally solve these problems in the same sense that Solomonoff induction solves the problem of inductive inference. We define universal measures of relatedness between tasks, and use these measures to develop universally optimal Bayesian transfer learning methods.
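The universal relatedness measures the abstract refers to are built on Kolmogorov complexity and are therefore uncomputable. A common computable stand-in for this style of measure (not the paper's own construction, and used here purely as an illustrative assumption) is the normalized compression distance, which approximates shared algorithmic information between two objects with an off-the-shelf compressor:

```python
import zlib

def C(s: bytes) -> int:
    # Compressed length as a crude, computable proxy for Kolmogorov complexity
    return len(zlib.compress(s, 9))

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance: small when x and y share structure,
    # near 1 when compressing them together saves almost nothing
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Toy "tasks": serialized labeled samples (hypothetical data for illustration)
task_a = b"cat:1 dog:0 cat:1 dog:0 cat:1 dog:0 " * 20
task_b = b"cat:1 dog:0 dog:0 cat:1 cat:1 dog:0 " * 20  # shares structure with task_a
task_c = bytes(range(256)) * 5                          # structurally unrelated

print(ncd(task_a, task_b))  # comparatively small: related tasks
print(ncd(task_a, task_c))  # comparatively large: unrelated tasks
```

In this toy setting, a Bayesian transfer learner could weight how much information to carry over from a source task in proportion to such a distance, mirroring the role the paper's (uncomputable) universal measures play in its optimality results.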