In this paper, we discuss a computational model that is able to detect and build word-like representations on the basis of sensory input. The model is designed and tested with the further aim of investigating how infants may learn to communicate by means of spoken language. The computational model makes use of a memory, a perception module, and the concept of a 'learning drive'. Learning takes place within a communicative loop between a 'caregiver' and the 'learner'. Experiments carried out on three European languages with different genetic backgrounds (Finnish, Swedish, and Dutch) show that a robust representation of a word can be learned from fewer than 100 acoustic tokens (examples) of that word. The model is inspired by the memory structure that is assumed to underlie human cognitive processing.