Image morphing: transfer learning between tasks that have multiple outputs

  • Authors:
  • Daniel L. Silver; Liangliang Tu

  • Affiliations:
  • Jodrey School of Computer Science, Acadia University, Wolfville, NS, Canada (both authors)

  • Venue:
  • Canadian AI'12 Proceedings of the 25th Canadian conference on Advances in Artificial Intelligence
  • Year:
  • 2012

Abstract

Prior work has reported the benefit of transfer learning on domains of single-output tasks, i.e., classification or prediction of a scalar. We investigate the use of transfer learning on a domain of tasks where each task has multiple outputs (i.e., the output is a vector). Multiple Task Learning (MTL) and Context-sensitive Multiple Task Learning (csMTL) neural networks are considered for a domain of image transformation tasks. Models are developed to transform images of neutral human faces into corresponding images of angry, happy and sad faces. The MTL approach proves problematic because the size of the network grows as a multiplicative function of the number of outputs and the number of tasks. Empirical results show that csMTL neural networks are capable of developing models superior to single task learning models when beneficial transfer occurs from one or more secondary tasks.
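The scaling problem described in the abstract can be sketched with a simple parameter count. The following is an illustrative comparison, not code from the paper: it assumes a single fully connected hidden layer, an MTL network with a separate bank of output units per task, and a csMTL network that instead encodes task identity as extra context inputs and shares one output vector across all tasks. All layer sizes below are hypothetical.

```python
def mtl_params(n_in, n_hidden, n_out_per_task, n_tasks):
    # MTL: shared input-to-hidden weights, but a separate bank of
    # output units for each task, so the hidden-to-output weight
    # matrix grows as n_out_per_task * n_tasks (biases omitted).
    return n_in * n_hidden + n_hidden * (n_out_per_task * n_tasks)

def csmtl_params(n_in, n_hidden, n_out_per_task, n_tasks):
    # csMTL: task identity enters as one-hot context inputs
    # (n_tasks extra input units); a single shared output vector
    # serves all tasks, so output size is independent of n_tasks.
    return (n_in + n_tasks) * n_hidden + n_hidden * n_out_per_task

# Hypothetical sizes: 32x32 grayscale images in and out, 3 tasks
# (angry, happy, sad), 50 hidden units.
print(mtl_params(1024, 50, 1024, 3))    # output layer triples with tasks
print(csmtl_params(1024, 50, 1024, 3))  # only 3 extra input weights per hidden unit
```

With vector outputs the per-task output banks dominate: adding a task costs `n_hidden * n_out_per_task` weights in MTL but only `n_hidden` weights in csMTL, which is why the abstract calls MTL's growth multiplicative in outputs and tasks.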