How can we build artificial agents that autonomously explore and understand their environments? An immediate requirement for such an agent is to learn how its own sensory states correspond to properties of the external world: it needs to learn the semantics of its internal states (i.e., grounding). In principle, we as programmers could provide the agent with the required semantics, but this would compromise the agent's autonomy. To overcome this problem, we may fall back on natural agents and ask how they acquire the meaning of their own sensory states, their neural firing patterns. An external observer can learn a lot about what certain neural spikes mean by carefully controlling the input stimulus while observing how the neurons fire. However, neurons embedded in the brain have no direct access to the outside stimuli, so such a stimulus-to-spike association may not be learnable at all. How, then, can the brain solve this problem? (We know it does.) We propose that motor interaction with the environment is necessary to overcome this conundrum. Further, we provide a simple yet powerful criterion, sensory invariance, for learning the meaning of sensory states. The basic idea is that a particular form of action sequence that maintains invariance of a sensory state will express the key property of the environmental stimulus that gave rise to that state. Our experiments with a sensorimotor agent trained on natural images show that sensory invariance can indeed serve as a powerful objective for semantic grounding.
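The sensory-invariance criterion can be illustrated with a toy sketch. This is our own illustration, not the paper's actual experiment or model: the environment, the `sense` function, and the reward definition are all assumptions made for exposition. The agent sits on an image containing a vertical bright stripe; its sensory state is the intensity under it (a stand-in for a feature detector's response), and each candidate action is scored by how well it preserves that sensory state. Actions that slide along the stripe keep the state invariant and thus reveal the stripe's orientation, the key property of the stimulus.

```python
import numpy as np

# Toy image: a vertical bright stripe at column 5 on a dark background.
img = np.zeros((11, 11))
img[:, 5] = 1.0

def sense(pos):
    """Sensory state: intensity at the agent's position (a stand-in
    for an internal feature detector's response)."""
    r, c = pos
    return img[r, c]

# Candidate motor actions: unit moves in the four grid directions.
ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def invariance_reward(pos, action):
    """Reward is highest (zero) when the action leaves the sensory
    state unchanged, and negative in proportion to the change."""
    dr, dc = ACTIONS[action]
    r, c = pos[0] + dr, pos[1] + dc
    if not (0 <= r < img.shape[0] and 0 <= c < img.shape[1]):
        return -1.0  # stepping off the image trivially breaks invariance
    return -abs(sense((r, c)) - sense(pos))

# Starting on the stripe, score each action by its invariance reward:
# moving up/down stays on the stripe (reward 0.0), while moving
# left/right falls off it (reward -1.0).
pos = (5, 5)
scores = {a: invariance_reward(pos, a) for a in ACTIONS}
best = max(scores, key=scores.get)
```

In this sketch the invariance-maximizing actions are exactly the ones aligned with the stimulus (up/down along a vertical edge), which is the sense in which an invariance-preserving action sequence "expresses" the property of the stimulus that produced the sensory state.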