Generating human-like motion for robots

  • Authors: Michael J. Gielniak, C. Karen Liu, Andrea L. Thomaz
  • Affiliation: College of Computing, Georgia Institute of Technology, Atlanta, GA, USA (all authors)
  • Venue: International Journal of Robotics Research
  • Year: 2013

Abstract

Action prediction and fluidity are key elements of human-robot teamwork: when a robot's actions are hard to understand, they impede fluid human-robot interaction. Our goal is to improve the clarity of robot motion by making it more human-like. We present an algorithm that autonomously synthesizes human-like variants of an input motion through a three-stage pipeline. First, we optimize the motion with respect to spatiotemporal correspondence (STC), which emulates the coordinated effects of human joints that are connected by muscles; three experiments validate that this STC optimization increases human-likeness and recognition accuracy for human social partners. Second, we avoid repetitive motion by adding variance: exploiting the redundant and underutilized spaces of the input motion produces multiple distinct motions from a single input. Two further experiments validate that the added variance preserves the human-likeness achieved in the first stage and that a social partner can still accurately recognize the motion's intent. Finally, we preserve the robot's ability to interact with its world by allowing the synthesized motions to satisfy constraints, and we experimentally analyze how constraints affect the synthesized human-like motion variants.
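The second stage relies on kinematic redundancy: joint-space directions that do not move the task output (e.g., the end-effector path) can absorb variation without changing what the motion accomplishes. As a rough illustration of this idea, the Python sketch below perturbs each configuration of a trajectory inside the Jacobian null space. It is a minimal stand-in under that interpretation, not the paper's actual variance algorithm; the function names, the Gaussian noise model, and the toy planar arm are all assumptions made for this example.

    import numpy as np

    def null_space_projector(J):
        # Projector onto the null space of task Jacobian J: joint-space
        # perturbations mapped through it leave the task output unchanged
        # (to first order).
        J_pinv = np.linalg.pinv(J)
        return np.eye(J.shape[1]) - J_pinv @ J

    def add_task_preserving_variance(q_traj, jacobian_fn, scale=0.05, seed=None):
        # Create a variant of a joint trajectory by injecting Gaussian noise
        # restricted to the null space at each configuration, so the task
        # (e.g., end-effector position) is approximately preserved.
        rng = np.random.default_rng(seed)
        variants = []
        for q in q_traj:
            N = null_space_projector(jacobian_fn(q))
            variants.append(q + scale * (N @ rng.standard_normal(q.shape[0])))
        return np.asarray(variants)

    def planar_arm_jacobian(q, link_lengths=(1.0, 1.0, 1.0)):
        # 2x3 position Jacobian of a planar 3R arm: three joints driving a
        # 2-D end-effector point, hence one redundant degree of freedom.
        J = np.zeros((2, len(q)))
        for i in range(len(q)):
            for j in range(i, len(q)):
                angle = np.sum(q[: j + 1])
                J[0, i] -= link_lengths[j] * np.sin(angle)
                J[1, i] += link_lengths[j] * np.cos(angle)
        return J

    # Usage: generate a variant of a 50-sample trajectory; its end-effector
    # path matches the original's up to first-order (linearization) error.
    q_traj = np.linspace([0.2, 0.3, 0.1], [1.0, -0.4, 0.6], 50)
    variant = add_task_preserving_variance(q_traj, planar_arm_jacobian,
                                           scale=0.1, seed=0)

Repeating the call with different seeds yields multiple variants of the same input motion, which mirrors the abstract's goal of creating several motions from a single input while keeping the task constraint intact.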