Learning the past tense of English verbs - a seemingly minor aspect of language acquisition - has generated heated debate since 1986 and has become a landmark task for testing the adequacy of cognitive modeling. Several artificial neural network (ANN) models have been implemented, and a challenge for better symbolic models has been posed. In this paper, we present a general-purpose Symbolic Pattern Associator (SPA) based on the decision-tree learning algorithm ID3. We conduct extensive head-to-head comparisons of the generalization ability of ANN models and the SPA under different representations. We conclude that the SPA generalizes the past tense of unseen verbs better than ANN models by a wide margin, and we offer insights into why this is the case. We also discuss a new default strategy for decision-tree learning algorithms.
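To make the ID3-based approach concrete, here is a minimal sketch of decision-tree learning applied to past-tense formation. It is not the paper's SPA: the feature encoding (the stem's last two letters), the toy training set, and the suffix-class labels are all illustrative assumptions. It does, however, show the two ingredients the abstract mentions - information-gain attribute selection as in ID3, and a majority-class default used when a test verb reaches an attribute value unseen in training.

```python
# Toy ID3-style pattern associator for past-tense suffix classes.
# Features, data, and labels are hypothetical; this is not the authors' SPA.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def id3(rows, labels, attrs):
    """rows: list of {attr: value} dicts; labels: parallel class list."""
    if len(set(labels)) == 1:          # pure node -> leaf
        return labels[0]
    if not attrs:                      # no attributes left -> majority leaf
        return Counter(labels).most_common(1)[0][0]

    def gain(a):                       # information gain of splitting on a
        parts = {}
        for r, y in zip(rows, labels):
            parts.setdefault(r[a], []).append(y)
        return entropy(labels) - sum(
            len(p) / len(labels) * entropy(p) for p in parts.values())

    best = max(attrs, key=gain)
    tree = {"attr": best,
            # majority-class default for unseen attribute values at test time
            "default": Counter(labels).most_common(1)[0][0],
            "children": {}}
    groups = {}
    for r, y in zip(rows, labels):
        groups.setdefault(r[best], ([], []))
        groups[r[best]][0].append(r)
        groups[r[best]][1].append(y)
    rest = [a for a in attrs if a != best]
    for v, (rs, ys) in groups.items():
        tree["children"][v] = id3(rs, ys, rest)
    return tree

def classify(tree, row):
    while isinstance(tree, dict):
        tree = tree["children"].get(row[tree["attr"]], tree["default"])
    return tree

def features(verb):
    # Assumed encoding: only the last two letters of the stem.
    return {"last": verb[-1], "second_last": verb[-2]}

# Hypothetical training pairs: (stem, past-tense suffix class).
train = [("walk", "+ed"), ("talk", "+ed"), ("jump", "+ed"),
         ("bake", "+d"), ("love", "+d"), ("smile", "+d")]
rows = [features(v) for v, _ in train]
labels = [y for _, y in train]
tree = id3(rows, labels, ["last", "second_last"])

print(classify(tree, features("kick")))  # unseen verb -> +ed
print(classify(tree, features("hope")))  # unseen verb -> +d
```

The `default` field stored at each internal node is one simple way to realize a default strategy: a test verb whose attribute value never occurred in training falls back to the majority class at that node, so the tree always emits some answer for unseen verbs.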