How close?: model of proximity control for information-presenting robots
Proceedings of the 3rd ACM/IEEE international conference on Human robot interaction
In this paper, we report a model that allows a robot to appropriately control its position as it presents information to a user. This capability is indispensable, since in the future many robots will function in everyday situations, such as shopkeepers presenting products to customers or museum guides presenting information to visitors. Psychology research suggests that people adjust their positions to establish a joint view toward a target object. Similarly, when a robot presents an object, it should stand at a position that accounts for the positions of both the listener and the object, so that the listener's field of view is unobstructed and a joint view is established. We observed human-human interactions in which people presented objects, and from these observations developed a model that enables an information-presenting robot to adjust its position appropriately. Our model consists of four constraints for establishing O-space: 1) proximity to the listener; 2) proximity to the object; 3) the listener's field of view; and 4) the presenter's field of view. We also experimentally evaluated the effectiveness of our model.
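The abstract does not give the paper's actual formulation, but the four constraints can be illustrated as terms of a single position cost that a robot minimizes when choosing where to stand. The sketch below is a hypothetical illustration, not the paper's model: the function name `position_cost`, the quadratic distance penalties, the preferred distances, the angular thresholds, and the equal weighting are all assumptions made for the example.

```python
import math

def position_cost(robot, listener, obj,
                  d_listener=1.0, d_object=0.6,
                  block_angle=math.pi / 4, fov=math.pi / 2):
    """Illustrative cost for a candidate robot position (lower is better).

    robot, listener, obj: (x, y) positions in metres.
    d_listener, d_object: assumed preferred distances (hypothetical values).
    block_angle: angular margin around the listener's line of sight to the
                 object inside which the robot counts as "blocking".
    fov: assumed angular span within which the presenter can see both
         the listener and the object at once.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def angle(origin, target):
        return math.atan2(target[1] - origin[1], target[0] - origin[0])

    def angdiff(a, b):
        # Smallest absolute difference between two angles, in [0, pi].
        return abs((a - b + math.pi) % (2 * math.pi) - math.pi)

    # 1) Proximity to listener: penalize deviation from a preferred distance.
    c1 = (dist(robot, listener) - d_listener) ** 2
    # 2) Proximity to object: likewise for the presented object.
    c2 = (dist(robot, obj) - d_object) ** 2
    # 3) Listener's field of view: penalize standing on (or near) the
    #    listener's line of sight to the object.
    blocking = angdiff(angle(listener, obj), angle(listener, robot))
    c3 = max(0.0, 1.0 - blocking / block_angle)
    # 4) Presenter's field of view: penalize positions from which the robot
    #    cannot keep both the listener and the object in view.
    subtended = angdiff(angle(robot, listener), angle(robot, obj))
    c4 = max(0.0, subtended - fov) ** 2

    return c1 + c2 + c3 + c4
```

Under this cost, a position beside the object scores better than one directly between listener and object, because the third term penalizes blocking the listener's line of sight even when the distance terms are satisfied. A planner could evaluate this cost over a grid of candidate positions and move the robot to the minimum.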