Knowledge acquisition through human–robot multimodal interaction
Intelligent Service Robotics
Human–robot interaction is currently receiving considerable attention in robotics. In this paper, we describe a multimodal system for generating a map of the environment through interaction between a human and a home robot. The system enables a person to teach a newcomer robot the attributes of objects and places in a room through speech commands and hand gestures. The robot learns the size, position, and topological relations of objects, and produces a map of the room from the knowledge acquired through communication with the human. The system comprises several components: natural language processing, posture recognition, object localization, and map generation. It combines multiple sources of information with model matching to detect and track the human hand, so that the user can point at an object of interest and guide the robot either to approach it or to record that object's position in the room. Object positions are estimated with a monocular camera using the depth-from-focus method.
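The depth-from-focus idea mentioned above can be sketched as follows: the camera captures the same scene patch at several focus settings, each image is scored with a sharpness measure, and the focus setting that maximizes sharpness indicates the object's depth. This is a minimal illustration, not the paper's implementation; the variance-of-Laplacian sharpness score, the function names, and the synthetic focus sweep are all assumptions made for the example.

```python
import numpy as np

def laplacian_variance(img):
    """Sharpness score: variance of a 4-neighbour Laplacian response."""
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def depth_from_focus(stack, focus_depths):
    """Return the depth whose focus setting yields the sharpest image.

    stack        -- list of 2-D grayscale patches, one per focus setting
    focus_depths -- depth (e.g. metres) each focus setting is tuned to
    """
    scores = [laplacian_variance(img) for img in stack]
    return focus_depths[int(np.argmax(scores))]

# Synthetic demo: blur a random texture by different amounts to mimic a
# focus sweep; the least-blurred image should identify the true depth.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))

def box_blur(img, k):
    """Crude separable box blur of width k (k=1 leaves the image unchanged)."""
    if k <= 1:
        return img
    kern = np.ones(k) / k
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="same"), 0, tmp)

depths = [0.5, 1.0, 2.0]                         # hypothetical focus distances
stack = [box_blur(sharp, k) for k in (5, 1, 9)]  # middle image is in focus
print(depth_from_focus(stack, depths))           # -> 1.0
```

In practice the sharpness score would be computed per image region rather than per patch, and the discrete focus sweep would be interpolated for finer depth resolution; the example keeps only the core maximize-sharpness step.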