The society of mind
Journal of Experimental & Theoretical Artificial Intelligence
Active vision
The conscious mind: in search of a fundamental theory
Map learning with uninterpreted sensors and effectors
Artificial Intelligence
The spatial semantic hierarchy
Artificial Intelligence
Mind and Mechanism
Semantic Information Processing
Cybernetics: Or Control and Communication in the Animal and the Machine
Memory representations in natural tasks
Journal of Cognitive Neuroscience
AAAI'06 Proceedings of the 21st national conference on Artificial intelligence - Volume 1
Consciousness: drinking from the firehose of experience
AAAI'05 Proceedings of the 20th national conference on Artificial intelligence - Volume 3
Autonomous development of a grounded object ontology by a learning robot
AAAI'07 Proceedings of the 22nd national conference on Artificial intelligence - Volume 2
Pengi: an implementation of a theory of activity
AAAI'87 Proceedings of the sixth National conference on Artificial intelligence - Volume 1
Guest editorial: Artificial consciousness: Theoretical and practical issues
Artificial Intelligence in Medicine
Putting egocentric and allocentric into perspective
SC'10 Proceedings of the 7th international conference on Spatial cognition
Objective: Computational concepts from robotics and computer vision hold great promise to account for major aspects of the phenomenon of consciousness, including philosophically problematic aspects such as the vividness of qualia, the first-person character of conscious experience, and the property of intentionality.

Methods: We present a dynamical systems model describing human or robotic agents and their interaction with the environment. To cope with the enormous information content of the sensory stream, this model includes trackers for selected coherent spatio-temporal portions of the sensory input stream, and a self-constructed, plausible, coherent narrative describing the recent history of the agent's sensorimotor interaction with the world.

Results: We describe how an agent can autonomously learn its own intentionality by constructing computational models of hypothetical entities in the external world. These models explain regularities in the sensorimotor interaction and serve as referents for the agent's symbolic knowledge representation. The high information content of the sensory stream allows the agent to continually evaluate these hypothesized models, refuting those that make poor predictions. The high information content of the sensory input stream also accounts for the vividness and uniqueness of subjective experience. We then evaluate our account against 11 features of consciousness "that any philosophical-scientific theory should hope to explain", according to the philosopher and prominent AI critic John Searle.

Conclusion: The essential features of consciousness can, in principle, be implemented on a robot with sufficient computational power and a sufficiently rich sensorimotor system, embodied and embedded in its environment.
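The evaluation loop described in the Results section — maintaining a population of hypothesized models of external entities and refuting those whose predictions fail against the sensory stream — can be sketched in a few lines. The following is a minimal illustrative sketch, not the authors' implementation; the class name `ObjectModel`, the `gain` parameter, and the scalar one-step dynamics are all assumptions made for the example.

```python
class ObjectModel:
    """A hypothetical model of an external entity that predicts the
    next sensor reading from the current one (illustrative only)."""

    def __init__(self, gain):
        self.gain = gain          # assumed one-parameter dynamics
        self.error = 0.0          # accumulated prediction error

    def predict(self, observation):
        # Toy linear dynamics: x_{t+1} ≈ gain * x_t
        return self.gain * observation


def refute_poor_models(models, stream, threshold):
    """Score each candidate model against successive pairs of the
    sensory stream; keep only those whose mean prediction error
    stays below the threshold (the 'refutation' step)."""
    for prev, nxt in zip(stream, stream[1:]):
        for m in models:
            m.error += abs(m.predict(prev) - nxt)
    steps = len(stream) - 1
    return [m for m in models if m.error / steps < threshold]


# A toy sensory stream generated by the "true" dynamics x_{t+1} = 2 * x_t.
stream = [1.0, 2.0, 4.0, 8.0, 16.0]
models = [ObjectModel(gain=g) for g in (0.5, 1.0, 2.0)]
survivors = refute_poor_models(models, stream, threshold=0.5)
# Only the gain=2.0 model predicts the stream well and survives.
```

In the paper's terms, the high information content of the stream is what makes this refutation effective: each new observation is another test that a poorly fitting model is unlikely to pass.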