Universal transfer learning
In transfer learning, the aim is to solve new learning tasks using fewer examples by exploiting information gained from solving related tasks. Existing transfer learning methods have been used successfully in practice, and PAC analyses of them have been developed. But the key notion of relatedness between tasks has not yet been defined clearly, which makes it difficult to understand, let alone answer, questions that naturally arise in the context of transfer, such as how much information to transfer, whether to transfer it at all, and how to transfer it across tasks. In this paper, we look at transfer learning from the perspective of Algorithmic Information Theory (Kolmogorov complexity theory) and formally solve these problems in the same sense that Solomonoff induction solves the problem of inductive inference. We define universal measures of relatedness between tasks and use these measures to develop universally optimal Bayesian transfer learning methods. We also derive results in AIT that are interesting in their own right. To address a concern that arises from the theory, we briefly examine the notion of the Kolmogorov complexity of probability measures. Finally, we present a simple practical approximation to the theory and show that even this is quite effective, allowing us to transfer across tasks that are superficially unrelated. The latter is an experimental feat that has not been achieved before, and it shows that the theory is also useful for constructing practical transfer algorithms.
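The practical approximation mentioned above can be illustrated with a standard trick from the AIT literature: since Kolmogorov complexity is uncomputable, a real compressor is used as a stand-in, and relatedness between two data sources is estimated by how much one helps compress the other (the normalized compression distance). The sketch below is illustrative only, not the paper's actual algorithm; the toy task strings and the choice of zlib are assumptions.

```python
import zlib

def c(x: bytes) -> int:
    """Compressed length as a computable stand-in for Kolmogorov complexity K(x)."""
    return len(zlib.compress(x, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: near 0 for related data, near 1 for unrelated."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Hypothetical training samples from three tasks (assumed for illustration):
task_a = b"0101" * 200          # a simple periodic source
task_b = b"0110" * 200          # superficially different, structurally similar
task_c = bytes(range(256)) * 3  # a source with little shared structure

# Related tasks compress well together, so their distance is smaller.
print(ncd(task_a, task_b), ncd(task_a, task_c))
```

A transfer method built on this measure would prefer to transfer information from the task with the smallest distance to the target task, here `task_b` rather than `task_c`.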