Cross-situational learning is based on the idea that a learner can determine the meaning of a word by finding something in common across all observed uses of that word. Although cross-situational learning is usually modeled through stochastic guessing games in which the input data vary erratically with time (or rounds of the game), here we investigate the possibility of applying the deterministic Neural Modeling Fields (NMF) categorization mechanism to infer the correct object-word mapping. Two different representations of the input data were considered. The first is termed the object-word representation because it takes as input all possible object-word pairs and weighs them by their frequencies of occurrence in the stochastic guessing game. A re-interpretation of the problem from the perspective of learning with noise indicates that the cross-situational scenario produces too low a signal-to-noise ratio, thus explaining the failure of NMF to infer the correct object-word mapping. The second representation, termed the context-word representation, takes as input all the objects in the pupil's visual field (the context) when a word is uttered by the teacher. In this case we show that using two hierarchical levels of NMF allows the inference of the correct object-word mapping.
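The core cross-situational idea in the opening sentence can be sketched in a few lines. This is a hypothetical, simplified illustration of context intersection (it is not the paper's NMF mechanism, and the function name and toy episodes are invented for the example): each time a word is heard, the set of candidate referents is intersected with the current context, so whatever object co-occurs with the word in every episode survives.

```python
def cross_situational_learn(episodes):
    """Infer word meanings by intersecting contexts across uses.

    episodes: iterable of (word, context) pairs, where context is the
    set of objects visible when the word was uttered.
    Returns a dict mapping each word to its surviving candidate referents.
    """
    candidates = {}
    for word, context in episodes:
        if word not in candidates:
            candidates[word] = set(context)          # first exposure: all objects are candidates
        else:
            candidates[word] &= set(context)         # keep only objects seen in every use
    return candidates


# Toy run: each word's true referent is present in all of its contexts,
# while the distractor objects vary from episode to episode.
episodes = [
    ("ball", {"ball", "dog"}),
    ("ball", {"ball", "cup"}),
    ("dog",  {"dog", "cup"}),
    ("dog",  {"dog", "ball"}),
]
mapping = cross_situational_learn(episodes)
# mapping == {"ball": {"ball"}, "dog": {"dog"}}
```

In a stochastic guessing game the contexts arrive in a different random order each run, but the intersection is order-independent, which is why the deterministic framing the abstract describes is a natural reformulation of the problem.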