SOAR: an architecture for general intelligence. Artificial Intelligence.
Unified theories of cognition.
Computer facial animation.
Eye gaze patterns in conversations: there is more to conversational agents than meets the eyes. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Theory of Mind for a Humanoid Robot. Autonomous Robots.
Where to look: a study of human-robot engagement. Proceedings of the 9th international conference on Intelligent user interfaces.
Teaching and Working with Robots as a Collaboration. AAMAS '04 Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems - Volume 3.
Identifying the addressee in human-human-robot interactions based on head pose and speech. Proceedings of the 6th international conference on Multimodal interfaces.
Conversing with the user based on eye-gaze patterns. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Analyzing and predicting focus of attention in remote collaborative tasks. ICMI '05 Proceedings of the 7th international conference on Multimodal interfaces.
Proceedings of the 2005 joint conference on Smart objects and ambient intelligence: innovative context-aware services: usages and technologies.
Children and robots learning to play hide and seek. Proceedings of the 1st ACM SIGCHI/SIGART conference on Human-robot interaction.
Proceedings of the 8th international conference on Multimodal interfaces.
Using vision, acoustics, and natural language for disambiguation. Proceedings of the ACM/IEEE international conference on Human-robot interaction.
Improving human-robot interaction through adaptation to the auditory scene. Proceedings of the ACM/IEEE international conference on Human-robot interaction.
Spatial representation and reasoning for human-robot collaboration. AAAI'07 Proceedings of the 22nd national conference on Artificial intelligence - Volume 2.
Enabling effective human-robot interaction using perspective-taking in robots. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans.
Footing in human-robot conversations: how robots might shape participant roles using gaze cues. Proceedings of the 4th ACM/IEEE international conference on Human robot interaction.
Nonverbal leakage in robots: communication of intentions through seemingly unintentional behavior. Proceedings of the 4th ACM/IEEE international conference on Human robot interaction.
Incorporating mental simulation for a more effective robotic teammate. AAAI'08 Proceedings of the 23rd national conference on Artificial intelligence - Volume 3.
Pointing to space: modeling of deictic interaction referring to regions. Proceedings of the 5th ACM/IEEE international conference on Human-robot interaction.
Conversational gaze mechanisms for humanlike robots. ACM Transactions on Interactive Intelligent Systems (TiiS).
MAWARI: a social interface to reduce the workload of the conversation. ICSR'11 Proceedings of the Third international conference on Social Robotics.
Are you looking at me?: perception of robot attention is mediated by gaze type and group size. Proceedings of the 8th ACM/IEEE international conference on Human-robot interaction.
We describe ACT-R/E (ACT-R/Embodied), a computational cognitive architecture for robots. ACT-R/E is based on ACT-R [1, 2] but uses different visual, auditory, and movement modules. We present an ACT-R/E model that integrates visual and auditory information to track conversations in a dynamic environment, and an empirical evaluation study showing that people perceive our conversation-tracking system as extremely natural.
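The abstract does not spell out how visual and auditory percepts are combined. As a purely illustrative sketch (not the authors' ACT-R/E model), the general idea of audio-visual fusion for conversation tracking can be shown with a toy addressee detector: the robot treats itself as the addressee only when speech is present, the speaker's head is oriented toward it, and the localized sound roughly coincides with the detected face. All names, thresholds, and the fusion rule below are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Percept:
    """One fused snapshot of the robot's sensors (hypothetical format)."""
    gaze_offset_deg: float   # angle from the robot's forward axis to the speaker's face
    audio_offset_deg: float  # angle from the robot's forward axis to the sound source
    speaking: bool           # whether speech is currently detected

def addressed_to_robot(p: Percept,
                       gaze_thresh: float = 15.0,
                       audio_thresh: float = 30.0) -> bool:
    """Toy fusion rule: the robot is the addressee only when someone is
    speaking, facing it, and the voice is localized near the face."""
    if not p.speaking:
        return False
    facing = abs(p.gaze_offset_deg) <= gaze_thresh
    aligned = abs(p.audio_offset_deg - p.gaze_offset_deg) <= audio_thresh
    return facing and aligned

# A speaker facing the robot, voice localized near the face:
print(addressed_to_robot(Percept(5.0, 10.0, True)))   # True
# A speaker turned well away from the robot:
print(addressed_to_robot(Percept(60.0, 55.0, True)))  # False
```

In a cognitive-architecture setting these percepts would arrive through separate perceptual modules and be matched in declarative or working memory rather than in a single function; the sketch only conveys why combining the two channels disambiguates cases that either channel alone would get wrong.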