Adaptive learning of linguistic hierarchy in a multiple timescale recurrent neural network
Proceedings of the 22nd International Conference on Artificial Neural Networks and Machine Learning (ICANN'12), Part I
We show that a Multiple Timescale Recurrent Neural Network (MTRNN) can acquire the capabilities to recognize, generate, and correct sentences by self-organizing in a way that mirrors the hierarchical structure of sentences: characters group into words, and words into sentences. The model controls which sentence to generate through its initial states (generation phase), and conversely the initial states can be computed from a target sentence (recognition phase). In our experiment, we trained the model on a set of unannotated sentences from an artificial language, each represented as a sequence of characters. Once trained, the model could recognize and generate grammatical sentences, including ones it had never been trained on. Moreover, the model could correct a small number of substitution errors in a sentence, and its correction performance improved when such errors were injected into the training sentences with a certain probability at each training iteration. An analysis of the neural activations revealed that the MTRNN had self-organized to reflect the hierarchical linguistic structure by exploiting the differences in timescale among its neurons: the fastest-changing neurons represented characters, slower ones represented words, and the slowest represented sentences.
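The timescale mechanism referred to here follows the leaky-integrator (continuous-time) update commonly used in MTRNN work: each unit i has a time constant tau_i and updates as u_i(t) = (1 - 1/tau_i) u_i(t-1) + (1/tau_i) (sum_j w_ij y_j(t-1) + b_i), with activation y = tanh(u). The following is a minimal NumPy sketch of that update with three context groups of increasing time constant; the group sizes, time constants, and weight initialization are illustrative assumptions, not the paper's settings.

import numpy as np

# Minimal sketch of the multiple-timescale (leaky-integrator) update in an
# MTRNN. Group sizes, time constants, and weights here are illustrative
# assumptions, not the values used in the paper.

rng = np.random.default_rng(0)

sizes = [40, 20, 10]     # fast ("character"), medium ("word"), slow ("sentence") units
taus = [2.0, 8.0, 32.0]  # larger tau => slower-changing activation
n = sum(sizes)
tau = np.concatenate([np.full(s, t) for s, t in zip(sizes, taus)])

W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))  # recurrent weights
b = np.zeros(n)                                      # biases

def step(u, y, x):
    # Leaky integration: each unit moves toward its new input at rate 1/tau,
    # so units with large tau retain their state over many time steps.
    u_new = (1.0 - 1.0 / tau) * u + (1.0 / tau) * (W @ y + b + x)
    return u_new, np.tanh(u_new)

# Generation phase: roll the network forward from an initial state; in the
# paper's scheme a learned, sentence-specific initial state selects the output.
u = rng.normal(size=n)  # stand-in for a learned initial state
y = np.tanh(u)
for t in range(100):
    x = np.zeros(n)     # external (character) input would be fed to the fast units
    u, y = step(u, y, x)

Because 1/tau scales the update, the slow group's activations stay nearly constant over the span of a word and can carry sentence-level context, which is the kind of self-organization the analysis above reports.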