We describe a multimodal framework for interacting with an autonomous robotic forklift. A key element enabling effective interaction is a wireless, handheld tablet with which a human supervisor can command the forklift using speech and sketch. Most current sketch interfaces treat the canvas as a blank slate. In contrast, our interface uses live and synthesized camera images from the forklift as the canvas, augmenting them with object and obstacle information from the world. This connection lets users "draw on the world," which permits a simpler set of sketched gestures. The interface supports commands that include summoning the forklift and directing it to lift, transport, and place loads of palletized cargo. We describe an exploratory evaluation of the system designed to identify areas for more detailed study. The framework also incorporates external signaling to interact with humans near the vehicle: the robot uses audible and visual annunciation to convey its current state and intended actions. Finally, the system provides seamless autonomy handoff: any human can take control of the robot by entering its cabin, at which point the forklift can be operated manually until the person exits.
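To make the "draw on the world" idea concrete, a sketched stroke on the tablet must be related to locations in the robot's surroundings rather than to a blank canvas. The following is a minimal illustrative sketch of one common way to do this, assuming a calibrated pinhole camera at a known height and pitch and a locally flat floor; the intrinsics, pose values, and the helper name `pixel_to_ground` are placeholder assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

# Illustrative only: ground a stroke drawn on the camera image by casting the
# viewing ray through each pixel and intersecting it with the z = 0 floor plane.
# All calibration numbers below are assumed placeholders.

pitch = np.deg2rad(20.0)                       # camera tilted 20 deg toward the floor
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])          # pinhole intrinsics (fx, fy, cx, cy)

# Camera-to-world rotation: camera x axis -> world -y, camera y axis -> pitched
# downward, camera z (optical axis) -> forward and slightly toward the floor.
R_cw = np.array([[0.0, -np.sin(pitch),  np.cos(pitch)],
                 [-1.0,           0.0,            0.0],
                 [0.0, -np.cos(pitch), -np.sin(pitch)]])
t_cw = np.array([0.0, 0.0, 2.5])               # camera mounted 2.5 m above the floor

def pixel_to_ground(u, v):
    """Intersect the viewing ray through pixel (u, v) with the z = 0 ground plane."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # back-project to a ray
    ray_world = R_cw @ ray_cam                           # express ray in world frame
    if ray_world[2] >= -1e-9:                            # ray never reaches the floor
        return None
    s = -t_cw[2] / ray_world[2]                          # scale that brings z to 0
    return t_cw + s * ray_world                          # 3-D point on the floor

# A sketched stroke (pixel coordinates) becomes a path on the warehouse floor.
stroke = [(320, 300), (360, 320), (400, 345)]
path_on_floor = [pixel_to_ground(u, v) for u, v in stroke]
```

Once stroke points are expressed in the world frame, they can be compared against the object and obstacle information overlaid on the image, which is what allows a small vocabulary of simple gestures (circling a pallet, drawing a path) to carry task-level meaning.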