When we build and interact with machines or robots that either look like humans or have human capabilities, people may well interact with these human-like machines in ways that mimic human-human communication. For example, if a robot has a face, a human might interact with it much as humans interact with other creatures that have faces: talking to it, gesturing to it, smiling at it, and so on. Likewise, if a human interacts with a computer or machine that understands spoken commands, the human might converse with it, expecting competence in spoken language.

In our research on a multimodal interface to mobile robots, we have assumed a model of communication and interaction that, in a sense, mimics how people communicate. Our interface therefore incorporates both natural language understanding and gesture recognition as communication modes. We limited the interface to these two modes to simplify their integration and to keep our research tractable. We believe that with an integrated system, the user is less concerned with how to communicate (which interactive mode to employ for a task) and is therefore free to concentrate on the tasks and goals at hand. Because all of the system's components are integrated, users can choose any combination of the interface's modalities; the onus is on the interface to fuse the input, process it, and produce the desired results.
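The fusion step described above can be sketched as follows: a spoken command whose referent is deictic ("go there") is resolved against a concurrent pointing gesture, while a named destination needs no gesture at all. This is a minimal illustration under assumed names (`SpeechInput`, `GestureInput`, `fuse`), not the interface's actual implementation.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Illustrative sketch only: these structures and the place map are
# assumptions for this example, not the system described in the paper.

@dataclass
class SpeechInput:
    verb: str        # spoken command, e.g. "go"
    referent: str    # spoken referent, e.g. "there" or "home"

@dataclass
class GestureInput:
    target: Tuple[float, float]   # (x, y) location the user pointed at

DEICTIC = {"there", "that", "here"}
NAMED_PLACES = {"home": (0.0, 0.0)}   # assumed map of known locations

def fuse(speech: SpeechInput,
         gesture: Optional[GestureInput]) -> Optional[Tuple[str, Tuple[float, float]]]:
    """Resolve a spoken command against an optional gesture.

    Deictic referents ("there") require a gesture to fix a location;
    named places do not. Returns (command, location), or None when the
    combined input is underspecified and the user should be asked again.
    """
    if speech.referent in DEICTIC:
        if gesture is None:
            return None   # "go there" with no pointing: ask for clarification
        return (speech.verb, gesture.target)
    if speech.referent in NAMED_PLACES:
        return (speech.verb, NAMED_PLACES[speech.referent])
    return None   # unknown referent

if __name__ == "__main__":
    # Speech plus gesture: the deictic "there" resolves to the pointed location.
    print(fuse(SpeechInput("go", "there"), GestureInput((2.0, 3.5))))
    # Speech alone suffices for a named place.
    print(fuse(SpeechInput("go", "home"), None))
```

The point of routing every modality through one resolver is exactly the freedom described above: the user may speak, gesture, or do both, and the same function produces a single grounded command either way.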