Timing multimodal turn-taking for human-robot cooperation

  • Author: Crystal Chao
  • Affiliation: Georgia Institute of Technology, Atlanta, GA, USA
  • Venue: Proceedings of the 14th ACM International Conference on Multimodal Interaction
  • Year: 2012

Abstract

In human cooperation, the concurrent use of multiple social modalities such as speech, gesture, and gaze results in robust and efficient communicative acts. Such multimodality, in combination with reciprocal intentions, supports fluent turn-taking. I hypothesize that human-robot turn-taking can be made more fluent through appropriate timing of multimodal actions. Managing timing requires both understanding the impact that timing can have on interactions and having a control system that supports the manipulation of that timing. To this end, I propose to develop a computational turn-taking model of the timing and information flow of reciprocal interactions. I also propose to develop an architecture based on the timed Petri net (TPN) for generating coordinated multimodal behavior, within which the turn-taking model will regulate turn timing as well as action initiation and interruption in order to seize and yield control. Through user studies in multiple domains, I intend to demonstrate the system's generality and evaluate it on balance of control, fluency, and task effectiveness.
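To make the TPN idea concrete, the following is a minimal sketch, in Python, of a timed Petri net driving two-party turn-taking. All names (`TimedPetriNet`, `robot_yields`, the delay values) are illustrative assumptions for this note, not details from the paper, and full TPN semantics (concurrent firing, enabling intervals) are simplified to one timed firing per step.

```python
from dataclasses import dataclass

# Minimal timed Petri net (TPN) sketch for two-party turn-taking.
# All names and delay values are illustrative assumptions, not
# details from the paper.

@dataclass
class Transition:
    name: str
    inputs: list    # places whose tokens are consumed on firing
    outputs: list   # places that receive tokens on firing
    delay: float    # firing delay in seconds (the "timed" part)

class TimedPetriNet:
    def __init__(self):
        self.marking = {}       # place name -> token count
        self.transitions = []

    def add_place(self, name, tokens=0):
        self.marking[name] = tokens

    def add_transition(self, name, inputs, outputs, delay):
        self.transitions.append(Transition(name, inputs, outputs, delay))

    def _enabled(self, t):
        return all(self.marking[p] > 0 for p in t.inputs)

    def step(self, clock):
        # Fire the first enabled transition, advancing the simulated
        # clock by its delay; return the new clock and the fired name.
        for t in self.transitions:
            if self._enabled(t):
                for p in t.inputs:
                    self.marking[p] -= 1
                for p in t.outputs:
                    self.marking[p] += 1
                return clock + t.delay, t.name
        return clock, None

# The "floor" (control of the turn) alternates between robot and human.
net = TimedPetriNet()
net.add_place("robot_has_floor", tokens=1)
net.add_place("human_has_floor")
net.add_transition("robot_yields", ["robot_has_floor"], ["human_has_floor"], delay=1.5)
net.add_transition("human_yields", ["human_has_floor"], ["robot_has_floor"], delay=2.0)

clock = 0.0
for _ in range(4):
    clock, fired = net.step(clock)
    print(f"t={clock:.1f}s  fired={fired}  marking={net.marking}")
```

A full system along the lines the abstract describes would also need transitions for seizing the floor (interrupting an in-progress action) and would couple firings to multimodal behavior generation (speech, gesture, gaze), but this alternating-floor net shows the core mechanism: tokens encode who holds control, and timed transitions encode when control changes hands.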