Machine learning of inductive bias
Neural Computation
Learning internal representations
COLT '95 Proceedings of the eighth annual conference on Computational learning theory
Machine Learning - Special issue on inductive transfer
Learning to learn
Machine Learning
Selective transfer of neural network task knowledge
Requirements for Machine Lifelong Learning
IWINAC '07 Proceedings of the 2nd international work-conference on The Interplay Between Natural and Artificial Computation, Part I: Bio-inspired Modeling of Cognitive Tasks
Inductive transfer with context-sensitive neural networks
Machine Learning
A multitask learning model for online pattern recognition
IEEE Transactions on Neural Networks
Selective transfer of task knowledge using stochastic noise
AI'03 Proceedings of the 16th Canadian society for computational studies of intelligence conference on Advances in artificial intelligence
Image transformation: inductive transfer between multiple tasks having multiple outputs
Canadian AI'08 Proceedings of the Canadian Society for computational studies of intelligence, 21st conference on Advances in artificial intelligence
A neural network model for sequential multitask pattern recognition problems
ICONIP'08 Proceedings of the 15th international conference on Advances in neuro-information processing - Volume Part I
Consolidation using context-sensitive multiple task learning
Canadian AI'11 Proceedings of the 24th Canadian conference on Advances in artificial intelligence
Machine lifelong learning: challenges and benefits for artificial general intelligence
AGI'11 Proceedings of the 4th international conference on Artificial general intelligence
The task rehearsal method (TRM) is introduced as an approach to life-long learning that uses the representation of previously learned tasks as a source of inductive bias. This inductive bias enables TRM to generate more accurate hypotheses for new tasks that have small sets of training examples. TRM has a knowledge retention phase during which the neural network representation of a successfully learned task is stored in a domain knowledge database, and a knowledge recall and learning phase during which virtual examples of stored tasks are generated from the domain knowledge. The virtual examples are rehearsed as secondary tasks in parallel with the learning of a new (primary) task using the ηMTL neural network algorithm, a variant of multiple task learning (MTL). The results of experiments on three domains show that TRM is effective in retaining task knowledge in a representational form and transferring that knowledge in the form of virtual examples. TRM with ηMTL is shown to develop more accurate hypotheses for tasks that suffer from impoverished training sets.
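The two phases described in the abstract, retaining a learned hypothesis and recalling it as virtual examples, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class, method names, and the use of an arbitrary callable as the stored "network representation" are assumptions made for clarity.

```python
import random


class TaskRehearsal:
    """Minimal sketch of TRM's knowledge retention and recall phases.

    A stored task is represented here by any callable hypothesis
    h(x) -> y standing in for a trained neural network.
    """

    def __init__(self):
        # Domain knowledge database: task name -> stored hypothesis.
        self.domain_knowledge = {}

    def retain(self, name, hypothesis):
        # Knowledge retention phase: store the representation of a
        # successfully learned task.
        self.domain_knowledge[name] = hypothesis

    def virtual_examples(self, name, n, input_sampler):
        # Knowledge recall phase: generate virtual (x, y) training
        # pairs by querying the stored hypothesis on sampled inputs.
        h = self.domain_knowledge[name]
        xs = [input_sampler() for _ in range(n)]
        return [(x, h(x)) for x in xs]


# Illustrative usage: retain a toy "learned" task, then recall
# virtual examples that would be rehearsed as a secondary task
# alongside the primary task during MTL training.
random.seed(0)
trm = TaskRehearsal()
trm.retain("task_A", lambda x: 1 if x > 0.5 else 0)
virtual = trm.virtual_examples("task_A", 5, random.random)
```

In the full method, `virtual` would be fed to an ηMTL network as a secondary output task while the impoverished primary task is learned in parallel.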