We present a novel interface for human-robot interaction that enables a human to intuitively and unambiguously select a 3D location in the world and communicate it to a mobile robot. The human points at a location of interest and illuminates it ("clicks it") with an unaltered, off-the-shelf green laser pointer. The robot detects the resulting laser spot with an omnidirectional, catadioptric camera fitted with a narrow-band green filter. After detection, the robot moves its stereo pan/tilt camera to look at this location and estimates the location's 3D position with respect to the robot's frame of reference. Unlike previous approaches, this interface for gesture-based pointing requires no instrumentation of the environment, uses an everyday, non-instrumented pointing device, has low spatial error out to 3 meters, is fully mobile, and is robust enough for real-world applications. We demonstrate that this human-robot interface enables a person to designate a wide variety of everyday objects placed throughout a room. In 99.4% of these tests, the robot successfully looked at the designated object and estimated its 3D position with low average error. We also show that this interface can support object acquisition by a mobile manipulator. For this application, the user selects an object to be picked up from the floor by "clicking" on it with the laser pointer interface. In 90% of these trials, the robot successfully moved to the designated object and picked it up off the floor.
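To make the two perception steps concrete, below is a minimal Python/OpenCV sketch of the pipeline the abstract describes: thresholding an image taken through the narrow-band green filter to locate the laser spot, then triangulating the spot's 3D position from a calibrated stereo pair. This is an illustrative reconstruction, not the paper's implementation; the function names (detect_laser_spot, triangulate), the intensity and blob-area thresholds, and the assumption of precomputed 3x4 stereo projection matrices are all hypothetical.

# Hedged sketch of laser-spot detection and stereo triangulation.
# Names and thresholds are illustrative assumptions, not from the paper.
import cv2
import numpy as np

def detect_laser_spot(gray, min_intensity=200, min_area=2, max_area=400):
    """Locate the laser spot in a single-channel image captured through
    a narrow-band green filter. Returns the (u, v) pixel centroid of the
    best candidate spot, or None if nothing plausible is found."""
    # With the filter in place, the laser spot should be among the
    # brightest pixels; a fixed threshold isolates candidates.
    _, mask = cv2.threshold(gray, min_intensity, 255, cv2.THRESH_BINARY)

    # Keep only small, compact blobs; lamps and specular highlights
    # that leak through the filter tend to be larger regions.
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    best, best_brightness = None, -1.0
    for i in range(1, num):  # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if not (min_area <= area <= max_area):
            continue
        blob_brightness = float(gray[labels == i].mean())
        if blob_brightness > best_brightness:
            best_brightness, best = blob_brightness, tuple(centroids[i])
    return best

def triangulate(pt_left, pt_right, P_left, P_right):
    """Estimate the spot's 3D position from its pixel coordinates in a
    calibrated stereo pair with 3x4 projection matrices P_left/P_right.
    Returns a 3-vector in the stereo camera's frame of reference."""
    l = np.asarray(pt_left, dtype=float).reshape(2, 1)
    r = np.asarray(pt_right, dtype=float).reshape(2, 1)
    X_h = cv2.triangulatePoints(P_left, P_right, l, r)  # homogeneous 4x1
    return (X_h[:3] / X_h[3]).ravel()

In the system as described, a sketch like this would run twice: the omnidirectional camera first detects the spot coarsely to cue the pan/tilt head, and the stereo pair then redetects it in both views for triangulation. The fixed intensity threshold is plausible only because the narrow-band filter suppresses most ambient light; without it, detection would need a more discriminative cue such as temporal differencing against the blinking or moving spot.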