Neural networks have previously shown promising performance on tasks such as classification, but they still suffer from an insufficient focus on the structure of the knowledge they represent. In this paper, we analyze various knowledge extraction techniques in detail and develop new transducer extraction techniques for interpreting recurrent neural network learning. First, we provide an overview of different ways to express structured knowledge using neural networks. Then, we rigorously analyze one type of recurrent network, applying a broad range of techniques. We argue that analysis techniques such as weight analysis using Hinton diagrams, hierarchical cluster analysis, and principal component analysis can provide useful views on the underlying knowledge. However, we demonstrate that these techniques are too static and too low-level for interpreting recurrent network classifications. The contribution of this paper is a particularly broad analysis of knowledge extraction techniques. Furthermore, we propose dynamic learning analysis and transducer extraction as two new dynamic interpretation techniques. Dynamic learning analysis provides a better understanding of how the network learns, while transducer extraction provides a better understanding of what the network represents.
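The analysis techniques named in the abstract can be illustrated with a minimal sketch. Everything below is an illustrative assumption rather than the paper's actual method: the toy Elman-style network with fixed random weights, the two-symbol input sequence, and the crude sign-pattern quantisation used to read off transducer-like state transitions. Only NumPy is used.

```python
# Sketch (assumed, not the paper's implementation) of two analysis steps:
# principal component analysis of recurrent-network hidden states, and a
# simple transducer extraction via quantising hidden states into clusters.
import numpy as np

rng = np.random.default_rng(0)

def rnn_states(inputs, hidden=8):
    """Run a toy Elman-style RNN with fixed random weights and
    return the hidden-state trajectory (one row per time step)."""
    w_in = rng.normal(size=(hidden, inputs.shape[1]))
    w_rec = rng.normal(size=(hidden, hidden)) * 0.5
    h = np.zeros(hidden)
    states = []
    for x in inputs:
        h = np.tanh(w_in @ x + w_rec @ h)
        states.append(h.copy())
    return np.array(states)

# One-hot encode a short binary symbol sequence and record hidden states.
seq = [0, 1, 1, 0, 1, 0, 0, 1]
X = np.eye(2)[seq]
H = rnn_states(X)

# Principal component analysis: project hidden states onto the two
# directions of largest variance for visual inspection.
H_centered = H - H.mean(axis=0)
_, _, Vt = np.linalg.svd(H_centered, full_matrices=False)
projected = H_centered @ Vt[:2].T   # shape: (time steps, 2)

# Crude transducer extraction: quantise each hidden state by its sign
# pattern, then read off input-conditioned transitions between clusters.
labels = [tuple(s > 0) for s in H]
transitions = {}
for t in range(1, len(labels)):
    transitions[(labels[t - 1], seq[t])] = labels[t]

print(projected.shape)  # (8, 2)
print(len(transitions))
```

Real extraction methods typically cluster states with k-means or hierarchical clustering and then minimise the resulting automaton; the sign-pattern quantisation here merely shows the state-discretisation idea in a few lines.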