Inductive transfer with context-sensitive neural networks
Machine Learning
The selective transfer of task knowledge within artificial neural networks is studied using sMTL, a modified version of the previously reported ηMTL (multiple task learning) method. sMTL is a knowledge-based inductive learning system that uses prior task knowledge and stochastic noise to adjust its inductive bias when learning a new task. The MTL representation of previously learned and consolidated tasks serves as the starting point for learning a new primary task. Task rehearsal ensures the stability of related secondary task knowledge within the sMTL network, while stochastic noise creates plasticity in the network so that the new task can be learned. sMTL controls the level of noise applied to each secondary task based on a measure of secondary-to-primary task relatedness. Experiments demonstrate that, from impoverished training sets, sMTL uses the prior representations to quickly develop predictive models that have (1) superior generalization ability compared with models produced by single-task learning or standard MTL, and (2) generalization ability equivalent to that of models produced by ηMTL.
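The mechanism the abstract describes — a shared multi-task network in which rehearsed secondary-task targets receive additive noise scaled inversely by task relatedness — can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the class name `SMTLSketch`, the noise schedule `sigma = 0.5 * (1 - r)`, and the toy tasks are all hypothetical choices made here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SMTLSketch:
    """Hypothetical sketch of the sMTL idea: a shared hidden layer feeds one
    output unit per task; secondary tasks are rehearsed from stored targets,
    with Gaussian noise added to their target signals in inverse proportion
    to an (assumed) relatedness score in [0, 1]."""

    def __init__(self, n_in, n_hidden, n_tasks, lr=0.5):
        self.W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
        self.W2 = rng.normal(scale=0.5, size=(n_hidden, n_tasks))
        self.lr = lr

    def forward(self, X):
        h = sigmoid(X @ self.W1)
        return h, sigmoid(h @ self.W2)

    def train_step(self, X, targets, relatedness):
        # targets: (n_examples, n_tasks); column 0 is the primary task,
        # the remaining columns are rehearsal targets for secondary tasks.
        noisy = targets.copy()
        for t, r in enumerate(relatedness):
            if t == 0:
                continue  # no noise on the primary task's targets
            sigma = 0.5 * (1.0 - r)  # less related -> more noise (plasticity)
            noisy[:, t] += rng.normal(scale=sigma, size=len(X))
        h, y = self.forward(X)
        err = noisy - y
        # standard backprop for a single sigmoid hidden layer
        d2 = err * y * (1 - y)
        d1 = (d2 @ self.W2.T) * h * (1 - h)
        self.W2 += self.lr * h.T @ d2
        self.W1 += self.lr * X.T @ d1
        return float(np.mean(err[:, 0] ** 2))  # primary-task MSE

# Toy use: primary task = XOR, one fairly related secondary task (AND).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0, 0], [1, 0], [1, 0], [0, 1]], dtype=float)  # cols: XOR, AND
net = SMTLSketch(n_in=2, n_hidden=8, n_tasks=2)
for _ in range(3000):
    loss = net.train_step(X, T, relatedness=[1.0, 0.8])
_, pred = net.forward(X)
print(pred[:, 0])  # primary-task outputs after training
```

The key design point is that relatedness modulates only the corruption of secondary-task targets: a highly related task is rehearsed with a nearly clean signal (preserving its stabilizing influence on the shared representation), while an unrelated task's signal is degraded, loosening its grip on the shared weights and freeing capacity for the new primary task.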