A formal definition of task relatedness that theoretically justifies the improvements obtained by multi-task learning (MTL) has remained elusive. Implementations of MTL with multi-layer perceptron (MLP) neural networks rest on the notion that related tasks share an underlying representation; this assumption can hurt training when the tasks are not in fact related in that way. In this paper we present a novel single-layer perceptron (SLP) approach that achieves selective knowledge transfer in a multi-task setting by using a different notion of task relatedness. Experimental results show that the proposed scheme substantially outperforms single-task learning (STL) with single-layer perceptrons and remains robust even when tasks that are not closely related are present.
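For context, the MLP implementation of MTL that the abstract contrasts against is the classic shared-representation architecture: one hidden layer common to all tasks, with a separate output head per task. Below is a minimal sketch of that setup, not of the paper's SLP selective-transfer method (whose mechanism the abstract does not detail); the layer sizes, task count, loss, and training data are illustrative assumptions.

```python
# Minimal sketch of shared-representation multi-task learning (MTL):
# one shared hidden layer learns a common representation, and each
# task has its own linear output head. All sizes are illustrative.
import torch
import torch.nn as nn

class SharedRepresentationMTL(nn.Module):
    def __init__(self, n_inputs: int, n_hidden: int, n_tasks: int):
        super().__init__()
        # Shared hidden layer: the "underlying representation"
        # that related tasks are assumed to have in common.
        self.shared = nn.Sequential(nn.Linear(n_inputs, n_hidden), nn.Tanh())
        # One output unit per task.
        self.heads = nn.ModuleList(
            [nn.Linear(n_hidden, 1) for _ in range(n_tasks)]
        )

    def forward(self, x):
        h = self.shared(x)
        # One prediction per task, stacked along the last dimension.
        return torch.cat([head(h) for head in self.heads], dim=-1)

if __name__ == "__main__":
    torch.manual_seed(0)
    model = SharedRepresentationMTL(n_inputs=10, n_hidden=8, n_tasks=3)
    x = torch.randn(32, 10)           # a batch of 32 examples (synthetic)
    y = torch.randn(32, 3)            # one target per task (synthetic)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.MSELoss()
    for step in range(100):
        opt.zero_grad()
        loss = loss_fn(model(x), y)   # joint loss over all tasks
        loss.backward()               # every task's gradient flows into
        opt.step()                    # the shared hidden layer
```

Because every task's gradient updates the shared layer, an unrelated task can distort the common representation; this is precisely the failure mode the abstract describes and that the proposed selective-transfer scheme is designed to avoid.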