References
A New Way to Represent the Relative Position between Areal Objects. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Distinctive Image Features from Scale-Invariant Keypoints. International Journal of Computer Vision.
Spatial language for human-robot dialogs. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews.
Linguistic description of relative positions in images. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics.
Robot-directed speech: using language to assess first-time users' conceptualizations of a robot. Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction.
One of the key components of natural interaction between humans and robots is the ability to understand the spatial relationships that exist in the natural world. Previous research has shown that the 2D spatial relationships FRONT, BEHIND, LEFT, RIGHT, and BETWEEN can be modeled with results consistent with human judgments. Upcoming research will involve a human-subject study investigating the use of spatial relationships in 3D space. This study will be the first step in extending the previous work on 2D spatial relations to a 3D representation, using 3D object point clouds generated by the SIFT algorithm and stereo vision. This extension will enrich human-robot dialog to include phrases such as "Bring me the coffee cup on top of the desk and to the right of the computer."
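The abstract does not specify how the 2D relations are computed; published approaches (e.g. linguistic description of relative positions) typically reason over full object outlines. As a minimal sketch only, assuming a centroid-based simplification that is not the authors' actual method, a relative direction between two objects can be classified from the angle between their centers:

```python
import math

def relative_direction(ref_center, obj_center):
    """Classify the 2D position of obj relative to ref by centroid angle.

    Hypothetical simplification for illustration: real systems model the
    full spatial extent of both objects, not just their center points.
    Convention assumed here: +x is RIGHT, +y is FRONT.
    """
    dx = obj_center[0] - ref_center[0]
    dy = obj_center[1] - ref_center[1]
    # Angle in degrees, normalized to [0, 360)
    angle = math.degrees(math.atan2(dy, dx)) % 360
    if angle < 45 or angle >= 315:
        return "RIGHT"
    if angle < 135:
        return "FRONT"
    if angle < 225:
        return "LEFT"
    return "BEHIND"

# Example: an object directly to the right of the reference
print(relative_direction((0, 0), (1, 0)))  # RIGHT
```

A graded version of this idea would return a degree of membership for each relation rather than a single label, which is closer to how such models achieve human-consistent results.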