Image transformation: inductive transfer between multiple tasks having multiple outputs

  • Authors:
  • Daniel L. Silver; Liangliang Tu

  • Affiliations:
  • Jodrey School of Computer Science, Acadia University, Wolfville, NS, Canada (both authors)

  • Venue:
  • Canadian AI'08: Proceedings of the Canadian Society for Computational Studies of Intelligence, 21st Conference on Advances in Artificial Intelligence
  • Year:
  • 2008


Abstract

Previous research has investigated inductive transfer for single-output modeling problems such as classification or prediction of a scalar. Little research has examined inductive transfer for tasks with multiple outputs. We report the results of using Multiple Task Learning (MTL) neural networks and Context-sensitive Multiple Task Learning (csMTL) on a domain of image transformation tasks. Models are developed to transform synthetic images of neutral (passport) faces into corresponding images of angry, happy, and sad faces. The results are inconclusive for MTL; however, they demonstrate that inductive transfer with csMTL is beneficial. When the secondary tasks have sufficient training examples from which to transfer knowledge, csMTL models are able to transform images more accurately than standard single task learning models.
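
The sketch below is not the authors' code; it is a minimal illustration, assuming a PyTorch implementation with arbitrary layer sizes, of the architectural contrast the abstract describes: an MTL network shares a hidden layer but keeps a separate multi-pixel output head per transformation task, while a csMTL network adds task-identifying context inputs and shares a single output layer across all tasks.

```python
# Minimal sketch (not the paper's implementation): MTL vs. csMTL networks
# for image-to-image transformation tasks. Layer sizes, activations, and
# the use of PyTorch are assumptions for illustration only.
import torch
import torch.nn as nn


class MTLNet(nn.Module):
    """Multiple Task Learning: shared hidden layer, one output head per task."""

    def __init__(self, n_pixels, n_hidden, n_tasks):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(n_pixels, n_hidden), nn.Sigmoid())
        self.heads = nn.ModuleList(
            [nn.Linear(n_hidden, n_pixels) for _ in range(n_tasks)]
        )

    def forward(self, x, task_id):
        h = self.shared(x)
        return torch.sigmoid(self.heads[task_id](h))  # transformed image


class CsMTLNet(nn.Module):
    """Context-sensitive MTL: the task identity enters as extra context
    inputs, and every task shares the same output layer."""

    def __init__(self, n_pixels, n_hidden, n_tasks):
        super().__init__()
        self.hidden = nn.Sequential(
            nn.Linear(n_pixels + n_tasks, n_hidden), nn.Sigmoid()
        )
        self.out = nn.Linear(n_hidden, n_pixels)

    def forward(self, x, context):
        # context is a one-hot vector selecting the task (e.g. angry/happy/sad)
        h = self.hidden(torch.cat([x, context], dim=1))
        return torch.sigmoid(self.out(h))


# Example: transform a batch of 16 hypothetical 32x32 "neutral face" images
# into the second of three expression tasks with each model.
pixels, hidden, tasks = 32 * 32, 64, 3
x = torch.rand(16, pixels)

mtl = MTLNet(pixels, hidden, tasks)
y_mtl = mtl(x, task_id=1)

context = torch.zeros(16, tasks)
context[:, 1] = 1.0  # one-hot context selects the task for csMTL
csmtl = CsMTLNet(pixels, hidden, tasks)
y_csmtl = csmtl(x, context)

print(y_mtl.shape, y_csmtl.shape)  # both: torch.Size([16, 1024])
```

Under this reading, transfer in csMTL comes from all tasks sharing every weight, with the context inputs alone indexing which transformation is being learned, rather than from task-specific output heads as in MTL.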