Short term memories and forcing the re-use of knowledge for generalization
ICANN'05: Proceedings of the 15th International Conference on Artificial Neural Networks: Formal Models and Their Applications, Part II
Although necessary, learning to discover new solutions is often long and difficult, even for supposedly simple tasks such as counting. Learning by imitation, on the other hand, offers a simple way to acquire knowledge by watching other agents act. To learn more complex tasks by imitation than mere sequences of actions, a Think Aloud protocol is introduced, together with a new neuro-symbolic network. The latter handles time in the same way as a Time Delay Neural Network and is augmented with basic first-order logic capabilities. Tested on a benchmark counting task, the network learns very fast and generalizes accurately, even though it has no initial bias toward counting.
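The abstract says the network handles time "in the same way as a Time Delay Neural Network": each unit sees the current input alongside copies delayed by a fixed number of steps, so temporal context is processed by ordinary feed-forward weights rather than recurrence. The sketch below illustrates that tapped-delay-line idea only; the function name, layer sizes, and activation are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

def tdnn_forward(x, W, b, delays=2):
    """One TDNN-style layer over a 1-D input sequence x.

    x : (T,) input sequence
    W : (delays + 1, H) weights, one row per delay tap
    b : (H,) bias
    Returns (T, H) hidden activations; inputs before t=0 are zero-padded.
    (Hypothetical helper for illustration, not from the paper.)
    """
    T = len(x)
    H = W.shape[1]
    out = np.zeros((T, H))
    for t in range(T):
        # Gather the current input and its delayed copies (zero before t=0),
        # then apply one ordinary feed-forward step to the stacked taps.
        taps = np.array([x[t - d] if t - d >= 0 else 0.0
                         for d in range(delays + 1)])
        out[t] = np.tanh(taps @ W + b)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal(5)
W = rng.standard_normal((3, 4))  # 2 delays + current input -> 4 hidden units
b = np.zeros(4)
h = tdnn_forward(x, W, b, delays=2)
print(h.shape)  # (5, 4)
```

Because the delays are fixed, the layer is just a feed-forward map over a sliding window, which is what makes TDNN-style time handling compatible with the symbolic extensions the abstract describes.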