Memory-based learning, which keeps full memory of the learning material, appears to be a viable approach to learning NLP tasks, and is often superior in generalisation accuracy to eager learning approaches that abstract away from the learning material. Here we investigate three partial-memory learning approaches that remove from memory specific task instance types estimated to be exceptional. Each approach implements one heuristic function for estimating the exceptionality of instance types: (i) typicality, (ii) class prediction strength, and (iii) friendly-neighbourhood size. Experiments are performed with the memory-based learning algorithm IB1-IG trained on English word pronunciation. We find that removing instance types with low class prediction strength (ii) is the only tested method that does not seriously harm generalisation accuracy. We conclude that keeping full memory of types rather than tokens, and excluding minority ambiguities, appear to be the only performance-preserving optimisations of memory-based learning.
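To make the class-prediction-strength heuristic concrete, the sketch below prunes a toy instance memory by estimating, for each stored type, how often it predicts the correct class when consulted as another instance's nearest neighbour. This is an illustrative approximation only: it uses a plain overlap (Hamming) distance rather than the information-gain-weighted distance of IB1-IG, a leave-one-out nearest-neighbour estimate of prediction strength, and an assumed pruning threshold of 0.5; the toy letter-in-context instances are invented for the example.

```python
# Minimal sketch (not the authors' implementation) of pruning an instance
# memory by class prediction strength (CPS).
# Assumptions: symbolic features, unweighted overlap distance (no
# information-gain weighting as in IB1-IG), leave-one-out estimation,
# and an illustrative threshold of 0.5.

from collections import Counter


def overlap_distance(a, b):
    """Number of mismatching feature values between two instances."""
    return sum(x != y for x, y in zip(a, b))


def class_prediction_strength(memory):
    """Estimate, for each stored (features, label) type, the fraction of
    cases in which it carries the correct class when it is another
    instance's nearest neighbour (leave-one-out over the memory)."""
    hits, uses = Counter(), Counter()
    for i, (feats_i, label_i) in enumerate(memory):
        # Find the nearest *other* instance type in memory.
        best_j, best_d = None, None
        for j, (feats_j, _) in enumerate(memory):
            if i == j:
                continue
            d = overlap_distance(feats_i, feats_j)
            if best_d is None or d < best_d:
                best_j, best_d = j, d
        uses[best_j] += 1
        if memory[best_j][1] == label_i:
            hits[best_j] += 1
    # Types never consulted as a neighbour are kept by default here (CPS 1.0).
    return {j: (hits[j] / uses[j]) if uses[j] else 1.0
            for j in range(len(memory))}


def prune_low_cps(memory, threshold=0.5):
    """Remove instance types whose prediction strength falls below threshold."""
    cps = class_prediction_strength(memory)
    return [inst for j, inst in enumerate(memory) if cps[j] >= threshold]


# Toy usage: letter-in-context features mapped to phoneme-like classes.
memory = [
    (("_", "b", "a"), "B"),
    (("b", "a", "t"), "ae"),
    (("a", "t", "_"), "T"),
    (("_", "c", "a"), "K"),
    (("c", "a", "t"), "ae"),
]
print(len(prune_low_cps(memory)), "of", len(memory), "instance types kept")
```

The key design choice mirrored from the abstract is that pruning operates on instance *types* (unique feature-value patterns), so full type memory is retained except for those whose stored class is usually wrong for their neighbours, i.e. minority ambiguities.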