Implementing human questioning strategies into quizzing-robot

  • Authors: Takaya Ohyama; Yasutomo Maeda; Chiaki Mori; Yoshinori Kobayashi; Yoshinori Kuno; Rio Fujita; Keiichi Yamazaki; Shun Miyazawa; Akiko Yamazaki; Keiko Ikeda

  • Affiliations: Saitama University, Saitama City, Japan; Saitama University, Saitama City, Japan; Saitama University, Saitama City, Japan; Saitama University & Japan Science Technology Agency, Saitama City, Japan; Saitama University, Saitama City, Japan; Saitama University, Saitama City, Japan; Saitama University, Saitama City, Japan; Japan Science Technology Agency, Saitama City, Japan; Japan Science Technology Agency, Saitama City, Japan; Kansai University, Osaka, Japan

  • Venue: HRI '12 Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction
  • Year: 2012

Abstract

From our ethnographic studies of various kinds of museums, we found that guides routinely pose questions to visitors in order to draw their attention to both the explanation and the exhibit. The guides' question sequences tend to begin with a pre-question, which serves not only to monitor visitors' behavior and responses but also to alert visitors that a primary question will follow. We implemented this questioning strategy in our robot system and investigated whether it would also work in human-robot interaction. We developed a vision system that enables the robot to choose an appropriate visitor by monitoring visitors' responses from the onset of the pre-question through the pause that follows it. Results indicate that this questioning strategy works effectively in human-robot interaction. In the experiment, the robot asked visitors about a photograph: as the pre-question it delivered a rather easy question, followed by a more challenging primary question (Figure 1). Participants who were unsure of their answer more often turned their heads away from the exhibit; they faced away from the robot, or smiled wryly at the robot or at each other. These behaviors index participants' states of knowledge, and they can be exploited so that the robot selects an appropriate candidate through computational recognition.
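
The selection step sketched in the abstract (monitoring visitors' head orientation and facial expression from the pre-question through the following pause, then choosing whom to address) can be illustrated with a minimal, hypothetical Python sketch. The paper's actual implementation is not described here; the sketch only assumes a vision pipeline that emits per-visitor head-orientation and smile estimates, and the names HeadSample, engagement_score, and select_respondent are invented for illustration.

    # Hypothetical sketch of pre-question-based respondent selection.
    # Assumes the vision system provides per-visitor head-orientation and
    # wry-smile samples between pre-question onset and the following pause.
    from dataclasses import dataclass
    from typing import Dict, List, Optional

    @dataclass
    class HeadSample:
        toward_exhibit: bool  # head oriented toward the exhibit/robot in this frame
        wry_smile: bool       # wry smile detected in this frame (assumed classifier output)

    def engagement_score(samples: List[HeadSample]) -> float:
        """Score how strongly a visitor's observed behavior suggests they can answer.

        Heuristic built on the cues reported above: consistently facing the
        exhibit raises the score; turning away or smiling wryly lowers it.
        """
        if not samples:
            return 0.0
        facing = sum(s.toward_exhibit for s in samples) / len(samples)
        wry = sum(s.wry_smile for s in samples) / len(samples)
        return facing - 0.5 * wry

    def select_respondent(observations: Dict[str, List[HeadSample]]) -> Optional[str]:
        """Pick the visitor the robot should address with the primary question."""
        if not observations:
            return None
        return max(observations, key=lambda vid: engagement_score(observations[vid]))

    # Example: two visitors tracked from pre-question onset through the pause.
    obs = {
        "visitor_A": [HeadSample(True, False), HeadSample(True, False)],
        "visitor_B": [HeadSample(False, True), HeadSample(True, False)],
    }
    print(select_respondent(obs))  # -> "visitor_A"

In this sketch the scoring is a simple weighted count of behavioral cues; any real system would instead rely on the robot's own head-tracking and expression-recognition modules and on thresholds tuned to the exhibit setting.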