Vision-based contingency detection

  • Authors:
  • Jinhan Lee, Jeffrey F. Kiser, Aaron F. Bobick, Andrea L. Thomaz

  • Affiliation:
  • Georgia Institute of Technology, Atlanta, GA, USA (all authors)

  • Venue:
  • Proceedings of the 6th International Conference on Human-Robot Interaction (HRI)
  • Year:
  • 2011

Abstract

We present a novel method for the visual detection of a contingent response by a human to the stimulus of a robot action. Contingency is defined as a change in an agent's behavior, within a specific time window, in direct response to a signal from another agent; detecting such responses is essential for assessing a human's willingness and interest in interacting with the robot. Using motion-based features to describe the possible contingent action, our approach assesses the visual self-similarity of video subsequences captured before the robot exhibits its signaling behavior and statistically models the typical graph-partitioning cost of separating an arbitrary subsequence of frames from the others. After the behavioral signal, the video is analyzed in the same way, and the cost of separating the after-signal frames from the before-signal sequences is computed; a lower-than-typical cost indicates a likely contingent reaction. We present a preliminary study in which data were captured and analyzed to evaluate the algorithm's performance.
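The detection scheme in the abstract can be sketched in a few steps: build a frame-to-frame similarity matrix from motion features, define a graph-partitioning cost for separating a window of frames from the rest, estimate the typical cost over arbitrary before-signal windows, and flag a contingent reaction when the after-signal window separates at a markedly lower cost. The sketch below is an illustration of that idea, not the authors' implementation; the Gaussian-kernel similarity, the normalized-cut-style cost, the window length `win`, and the z-score threshold are all assumptions.

```python
import numpy as np

def similarity_matrix(features, sigma=1.0):
    """Gaussian-kernel self-similarity over per-frame motion descriptors.
    features: (n_frames, d) array; sigma is an assumed bandwidth."""
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def partition_cost(S, idx):
    """Normalized-cut-style cost of separating frames `idx` from the rest:
    cut(A, B) / assoc(A, V) + cut(A, B) / assoc(B, V)."""
    mask = np.zeros(len(S), dtype=bool)
    mask[idx] = True
    cut = S[mask][:, ~mask].sum()
    return cut / S[mask].sum() + cut / S[~mask].sum()

def is_contingent(features, signal_frame, win, threshold=2.0):
    """Flag a likely contingent reaction in the `win` frames after
    `signal_frame` by comparing their separation cost against the
    distribution of costs for arbitrary before-signal windows."""
    S = similarity_matrix(features)
    # Baseline: cost of cutting out each sliding window of the
    # before-signal video from the remaining before-signal frames.
    before = S[:signal_frame, :signal_frame]
    baseline = [partition_cost(before, np.arange(s, s + win))
                for s in range(signal_frame - win)]
    mu, sd = np.mean(baseline), np.std(baseline) + 1e-9
    # Cost of separating the after-signal window from before-signal frames.
    n = signal_frame + win
    after_cost = partition_cost(S[:n, :n], np.arange(signal_frame, n))
    z = (after_cost - mu) / sd
    # A cost well below the baseline means the after-signal motion is
    # easy to split off, i.e. the behavior changed after the signal.
    return bool(z < -threshold)
```

On synthetic features where the after-signal frames shift to a clearly different motion pattern, the after-signal window's cut cost collapses toward zero while the baseline windows remain expensive to separate, so the z-score test fires.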