Image transformation: inductive transfer between multiple tasks having multiple outputs
Canadian AI'08 Proceedings of the Canadian Society for Computational Studies of Intelligence, 21st Conference on Advances in Artificial Intelligence
Prior work has reported the benefit of transfer learning on domains of single-output tasks, i.e., classification or prediction of a scalar. We investigate the use of transfer learning on a domain of tasks where each task has multiple outputs (i.e., the output is a vector). Multiple Task Learning (MTL) and context-sensitive Multiple Task Learning (csMTL) neural networks are considered for a domain of image transformation tasks. Models are developed to transform images of neutral human faces into corresponding images of angry, happy, and sad faces. The MTL approach proves problematic because the size of the network grows as a multiplicative function of the number of outputs and the number of tasks. Empirical results show that csMTL neural networks are capable of developing models superior to single-task learning models when beneficial transfer occurs from one or more secondary tasks.
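The scaling argument in the abstract can be made concrete with a rough parameter count. The sketch below assumes one fully connected hidden layer and omits biases; the layer sizes are hypothetical and the paper's exact architectures may differ. In MTL each task gets its own output block, so output-layer weights grow with (number of tasks) × (outputs per task); in csMTL the task identity is supplied as extra context inputs to a single shared network, so only the input layer grows with the number of tasks.

```python
def mtl_params(n_in, n_hidden, n_out_per_task, n_tasks):
    """MTL: shared input-to-hidden weights, but a separate
    hidden-to-output block per task, so the output layer
    grows multiplicatively in outputs x tasks."""
    return n_in * n_hidden + n_hidden * n_out_per_task * n_tasks

def csmtl_params(n_in, n_hidden, n_out, n_tasks):
    """csMTL: task identity encoded as extra context inputs
    (e.g. one-hot over tasks); one shared output layer serves
    all tasks, so only the input layer grows with n_tasks."""
    return (n_in + n_tasks) * n_hidden + n_hidden * n_out

# Hypothetical sizes: a 100-pixel image, 20 hidden units, 3 tasks.
print(mtl_params(100, 20, 100, 3))    # 100*20 + 20*100*3 = 8000
print(csmtl_params(100, 20, 100, 3))  # 103*20 + 20*100   = 4060
```

Under these toy numbers the csMTL network adds only n_tasks × n_hidden weights per extra task, while the MTL network adds a full n_hidden × n_out output block, which is the growth problem the abstract describes.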