Action prediction and fluidity are key elements of human-robot teamwork. When a robot's actions are hard to understand, fluid human-robot interaction suffers. Our goal is to improve the clarity of robot motion by making it more human-like. We present an algorithm that autonomously synthesizes human-like variants of an input motion. Our approach is a three-stage pipeline. First, we optimize the motion with respect to spatiotemporal correspondence (STC), which emulates the coordinated effects of human joints that are connected by muscles. We present three experiments validating that our STC optimization increases human-likeness and recognition accuracy for human social partners. Second, we avoid repetitive motion by adding variance: exploiting redundant and underutilized spaces of the input motion lets us generate multiple distinct motions from a single input. In two experiments we validate that our variance approach preserves the human-likeness achieved in the previous step, and that a social partner can still accurately recognize the motion's intent. Finally, we preserve the robot's ability to interact with its world by enabling it to satisfy constraints on the synthesized motion. We provide experimental analysis of the effects of such constraints on the human-like robot motion variants.
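The three-stage pipeline described in the abstract can be sketched in code. This is an illustrative toy, not the authors' algorithm: the joint-coupling blend standing in for STC optimization, the `free_joints` set marking "underutilized" joints, and the per-joint limit clamping are all simplifying assumptions introduced here for clarity.

```python
import random

def optimize_stc(motion, coupling=0.3):
    """Stage 1 (toy STC): blend each joint toward the frame mean,
    mimicking the coordinated effect of muscle-coupled human joints."""
    out = []
    for frame in motion:
        mean = sum(frame) / len(frame)
        out.append([(1 - coupling) * q + coupling * mean for q in frame])
    return out

def add_variance(motion, free_joints, scale=0.05, rng=None):
    """Stage 2 (toy): perturb only redundant/underutilized joints,
    leaving the intent-carrying joints of the input motion untouched."""
    rng = rng or random.Random(0)
    return [[q + (rng.uniform(-scale, scale) if j in free_joints else 0.0)
             for j, q in enumerate(frame)]
            for frame in motion]

def apply_constraints(motion, limits):
    """Stage 3 (toy): clamp every joint angle into its allowed range,
    standing in for task and workspace constraint satisfaction."""
    return [[min(max(q, lo), hi) for q, (lo, hi) in zip(frame, limits)]
            for frame in motion]

# Hypothetical 3-joint motion (4 frames of joint angles in radians).
motion = [[0.0, 0.5, 1.0], [0.1, 0.6, 0.9],
          [0.2, 0.7, 0.8], [0.3, 0.8, 0.7]]
limits = [(-1.0, 1.0)] * 3
variant = apply_constraints(add_variance(optimize_stc(motion), {2}), limits)
```

Re-running `add_variance` with different random seeds yields multiple human-like variants of the one input motion, which is the repetition-avoidance idea of the second stage.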