Adaptive integration of multiple cues for contingency detection

  • Authors:
  • Jinhan Lee, Crystal Chao, Andrea L. Thomaz, Aaron F. Bobick

  • Affiliations:
  • School of Interactive Computing, Georgia Institute of Technology, Atlanta, GA (all authors)

  • Venue:
  • HBU'11: Proceedings of the Second International Conference on Human Behavior Understanding
  • Year:
  • 2011

Abstract

Critical to natural human-robot interaction is the capability of robots to detect contingent reactions by humans. In various interaction scenarios, a robot can recognize a human's intention by detecting the presence or absence of a human response to its interactive signal. In our prior work [1], we addressed the problem of detecting visible reactions by developing a method for detecting changes in human behavior resulting from a robot signal. We extend this behavior change detector by integrating multiple cues with a mechanism that operates at two levels of information integration and by adaptively weighting the cues according to their reliability. We also propose a new method for evaluating cue reliability online during interaction. We perform a data collection experiment using the Wizard-of-Oz methodology in a turn-taking scenario in which a humanoid robot plays the imitation game “Simon says” with human partners. Using this dataset, which includes motion and body pose cues derived from depth and color images, we evaluate our contingency detection module with the proposed integration mechanisms and show the importance of selecting the appropriate level of cue integration.
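
The abstract describes combining several behavioral cues and weighting them by an online reliability estimate. The sketch below is only an illustration of that general idea, not the authors' implementation: the cue names, the reliability update rule, and the decision threshold are all assumptions introduced for clarity.

```python
# Illustrative sketch of reliability-weighted, decision-level cue fusion for
# contingency detection. All names and update rules here are hypothetical.

from dataclasses import dataclass


@dataclass
class Cue:
    name: str            # e.g. "motion" or "body pose"
    reliability: float   # in [0, 1], updated online during interaction

    def update_reliability(self, agreement: float, rate: float = 0.1) -> None:
        # Exponential moving average toward how well this cue agreed with the
        # fused decision on recent turns (hypothetical update rule).
        self.reliability = (1.0 - rate) * self.reliability + rate * agreement


def fuse_decision_level(scores: dict[str, float], cues: dict[str, Cue],
                        threshold: float = 0.5) -> bool:
    """Combine per-cue contingency scores with reliability weights and threshold."""
    total_weight = sum(c.reliability for c in cues.values()) or 1.0
    fused = sum(scores[name] * cues[name].reliability for name in scores) / total_weight
    return fused > threshold


if __name__ == "__main__":
    cues = {"motion": Cue("motion", 0.8), "pose": Cue("pose", 0.5)}
    # Per-cue scores for one window after a robot signal: how strongly each
    # cue indicates a change in the human's behavior.
    scores = {"motion": 0.9, "pose": 0.3}
    print(fuse_decision_level(scores, cues))  # True/False contingency decision
```

In this decision-level variant each cue produces its own score before fusion; the "two levels of integration" mentioned in the abstract suggests the alternative of merging raw cue evidence earlier in the pipeline, which this sketch does not cover.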