Operating in a physical context, an intelligent robot faces two fundamental problems. First, it needs to combine information from its different sensors to form a representation of the environment that is more complete than any single sensor could provide. Second, it needs to connect high-level representations (such as those used for planning and dialogue) with sensory information, so that the interpretations of these symbolic representations are grounded in the situated context. Previous approaches have addressed these problems using techniques such as (low-level) information fusion, ontological reasoning, and (high-level) concept learning. This paper presents a framework in which these and related approaches can be combined to form a shared representation of the current state of the robot in relation to its environment and other agents. Preliminary results from an implemented system are presented to illustrate how the framework supports behaviours commonly required of an intelligent robot.
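The core idea of such a shared representation can be sketched in code: percepts arriving from different modalities are bound into a single entity whenever their features are compatible, so that, for example, a visual detection and a dialogue reference end up grounded in the same object. This is only a minimal illustrative sketch, not the paper's actual architecture; all class and function names here are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Percept:
    """One modality's observation of some entity (names are illustrative)."""
    modality: str   # e.g. "vision", "dialogue"
    features: dict  # attribute -> value, e.g. {"colour": "red"}

@dataclass
class Entity:
    """A shared-representation entity binding percepts across modalities."""
    percepts: list = field(default_factory=list)

    def features(self):
        # Union of all bound percepts' features.
        merged = {}
        for p in self.percepts:
            merged.update(p.features)
        return merged

def compatible(entity, percept):
    """A percept may bind to an entity if no shared attribute conflicts."""
    ef = entity.features()
    return all(ef.get(k, v) == v for k, v in percept.features.items())

def bind(entities, percept):
    """Attach the percept to the first compatible entity, or make a new one."""
    for e in entities:
        if compatible(e, percept):
            e.percepts.append(percept)
            return e
    e = Entity(percepts=[percept])
    entities.append(e)
    return e

# Usage: a visual detection of a red mug and a dialogue reference to
# something red bind to the same entity; a blue box does not.
world = []
bind(world, Percept("vision", {"shape": "mug", "colour": "red"}))
bind(world, Percept("dialogue", {"colour": "red"}))
bind(world, Percept("vision", {"shape": "box", "colour": "blue"}))
print(len(world))  # two distinct entities
```

In a real system the compatibility test would of course be probabilistic and learned (the fusion, ontological reasoning, and concept-learning techniques the abstract mentions), rather than the exact feature match used here for brevity.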