Selective transfer of task knowledge using stochastic noise

  • Authors:
  • Daniel L. Silver; Peter McCracken

  • Affiliations:
  • Intelligent Information Technology Research Laboratory, Jodrey School of Computer Science, Acadia University, Wolfville, Nova Scotia, Canada (both authors)

  • Venue:
  • AI'03: Proceedings of the 16th Canadian Society for Computational Studies of Intelligence Conference on Advances in Artificial Intelligence
  • Year:
  • 2003

Abstract

The selective transfer of task knowledge within the context of artificial neural networks is studied using sMTL, a modified version of the previously reported ηMTL (multiple task learning) method. sMTL is a knowledge-based inductive learning system that uses prior task knowledge and stochastic noise to adjust its inductive bias when learning a new task. The MTL representation of previously learned and consolidated tasks is used as the starting point for learning a new primary task. Task rehearsal ensures the stability of related secondary task knowledge within the sMTL network, while stochastic noise creates plasticity in the network so that the new task can be learned. sMTL controls the level of noise applied to each secondary task based on a measure of secondary-to-primary task relatedness. Experiments demonstrate that, from impoverished training sets, sMTL uses the prior representations to quickly develop predictive models that have (1) superior generalization ability compared with models produced by single task learning or standard MTL and (2) generalization ability equivalent to that of models produced by ηMTL.
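
The abstract does not give implementation details, so the following is only a minimal sketch of the core idea as described: a shared-hidden-layer MTL network is initialized from a prior (consolidated) network, secondary tasks are rehearsed on targets generated by that prior network, and Gaussian noise scaled by a per-task relatedness score perturbs those rehearsal targets so that less related tasks constrain the representation less. All names (SMTLNet, train_step, relatedness, base_noise) and the exact noise schedule are assumptions for illustration, not the authors' ηMTL/sMTL formulation.

    # Hypothetical sketch of noise-modulated task rehearsal in an MTL network.
    # Assumes PyTorch; head 0 is the new primary task, heads 1..k are the
    # previously consolidated secondary tasks.
    import torch
    import torch.nn as nn

    class SMTLNet(nn.Module):
        """One shared hidden layer feeding one output head per task."""
        def __init__(self, n_inputs, n_hidden, n_tasks):
            super().__init__()
            self.shared = nn.Sequential(nn.Linear(n_inputs, n_hidden), nn.Sigmoid())
            self.heads = nn.ModuleList([nn.Linear(n_hidden, 1) for _ in range(n_tasks)])

        def forward(self, x):
            h = self.shared(x)
            return torch.cat([head(h) for head in self.heads], dim=1)

    def train_step(model, prior_model, x, y_primary, relatedness, optimizer,
                   base_noise=0.5):
        """One update: fit the primary task (head 0) while rehearsing the
        secondary tasks (heads 1..k) on noisy virtual targets produced by
        the prior, consolidated network."""
        model.train()
        optimizer.zero_grad()
        with torch.no_grad():
            rehearsal = prior_model(x)[:, 1:]            # virtual secondary targets
            # Less related secondary tasks receive more noise, loosening their
            # hold on the shared representation (assumed noise schedule).
            noise_std = base_noise * (1.0 - relatedness)  # shape: (n_tasks - 1,)
            rehearsal = rehearsal + torch.randn_like(rehearsal) * noise_std
        out = model(x)
        loss = nn.functional.mse_loss(out[:, :1], y_primary) \
             + nn.functional.mse_loss(out[:, 1:], rehearsal)
        loss.backward()
        optimizer.step()
        return loss.item()

In this sketch the new network would be created with the prior network's weights (the "starting point" mentioned in the abstract), and `relatedness` would be a tensor of per-secondary-task scores in [0, 1]; how sMTL actually measures relatedness and maps it to a noise level is specified in the paper itself.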