Artificial intelligence: the very idea
Certain aspects of the anatomy and physiology of the cerebral cortex. Parallel distributed processing: explorations in the microstructure of cognition, vol. 2
The artificial intelligence debate: false starts, real foundations
Learning from hints in neural networks. Journal of Complexity
Neurocomputing
The appeal of parallel distributed processing. Parallel distributed processing: explorations in the microstructure of cognition, vol. 1
Neural network learning and expert systems
The computational brain
What is cognitive science?
On the power of sigmoid neural networks. COLT '93 Proceedings of the sixth annual conference on Computational learning theory
What is computational neuroscience? Computational neuroscience
On the computational power of neural nets. Journal of Computer and System Sciences
Machine learning, neural and statistical classification
Extraction of rules from discrete-time recurrent neural networks. Neural Networks
Neural networks and analog computation: beyond the Turing limit
Architectures for Intelligence
Matter and Consciousness
Pattern Recognition and Neural Networks
Introduction to Automata Theory, Languages, and Computation
Strong Semantic Systematicity from Hebbian Connectionist Learning. Minds and Machines
Machine Learning
Rule-Injection Hints as a Means of Improving Network Performance and Learning Time. Proceedings of the EURASIP Workshop 1990 on Neural Networks
Concept acquisition through representational adjustment
Computation: finite and infinite machines
On the computational power of Elman-style recurrent networks. IEEE Transactions on Neural Networks
Spatial Cognition and Computation
Extracting linguistic quantitative rules from supervised neural networks. International Journal of Knowledge-based and Intelligent Engineering Systems
This paper examines whether a classical model can be translated into a PDP network using a standard connectionist training technique called extra output learning. In Study 1, standard machine learning techniques were used to create a decision tree that classifies 8124 different mushrooms as edible or poisonous on the basis of 21 different features (Schlimmer, 1987). In Study 2, extra output learning was used to insert this decision tree into a PDP network trained on the identical problem. An interpretation of the trained network revealed a perfect mapping from its internal structure to the decision tree, representing a precise translation of the classical theory into the connectionist model. In Study 3, a second network was trained on the mushroom problem without extra output learning. An interpretation of this second network revealed a different algorithm for solving the mushroom problem, demonstrating that the Study 2 network was indeed a proper theory translation.
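The core idea of extra output learning, as the abstract describes it, is to train the network not only on the primary classification but also on additional output units that reproduce the decisions made at each node of a pre-built decision tree, pressuring the hidden layer toward the tree's structure. The following is a minimal, hypothetical sketch of that setup with a toy two-node tree and synthetic binary data; it does not use Schlimmer's mushroom data set, and all names and sizes are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 64 examples with 4 binary features (hypothetical stand-in
# for the 8124 mushrooms x 21 features described in the abstract).
X = rng.integers(0, 2, size=(64, 4)).astype(float)

# Hypothetical decision tree: the root tests feature 0; a second node
# then tests feature 1.  The class is feature0 AND feature1.
node1 = X[:, 0]               # root-node decision
node2 = X[:, 0] * X[:, 1]     # second-node decision
y_class = node2               # primary classification target

# Extra output learning: targets are [class, node1, node2], so the
# network must also reproduce each tree node's decision.
Y = np.stack([y_class, node1, node2], axis=1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer; weights initialized small and random.
n_hidden = 4
W1 = rng.normal(0, 0.5, (4, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 3)); b2 = np.zeros(3)

lr = 0.5
for _ in range(3000):
    H = sigmoid(X @ W1 + b1)          # hidden activations
    O = sigmoid(H @ W2 + b2)          # [class, node1, node2] outputs
    dO = (O - Y) * O * (1 - O)        # squared-error delta at outputs
    dH = (dO @ W2.T) * H * (1 - H)    # backpropagated hidden delta
    W2 -= lr * H.T @ dO / len(X); b2 -= lr * dO.mean(axis=0)
    W1 -= lr * X.T @ dH / len(X); b1 -= lr * dH.mean(axis=0)

# Only the first output unit is read out as the classification.
preds = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)[:, 0] > 0.5).astype(float)
accuracy = (preds == y_class).mean()
print(accuracy)
```

After training, the hidden units can be inspected (as in the paper's Study 2 interpretation step) to ask whether their activations track the tree-node decisions the extra outputs were trained on.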