Expression transfer for facial sketch animation

  • Authors:
  • Yang Yang, Nanning Zheng, Yuehu Liu, Shaoyi Du, Yuanqi Su, and Yoshifumi Nishio

  • Affiliations:
  • Yang Yang: The Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an 710049, China, and The Department of Electrical and Electronic Engineering, The University of Tokushima, Tokushima 770-8506, Japan
  • Nanning Zheng, Yuehu Liu, Shaoyi Du, Yuanqi Su: The Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an 710049, China
  • Yoshifumi Nishio: The Department of Electrical and Electronic Engineering, The University of Tokushima, Tokushima 770-8506, Japan

  • Venue:
  • Signal Processing
  • Year:
  • 2011


Abstract

This paper presents a hierarchical animation method for transferring facial expressions extracted from a performance video to different facial sketches. Without any expression examples from the target faces, our approach transfers expressions to facial sketches through motion retargeting. In practice, however, two difficulties arise: image noise in each frame reduces the accuracy of feature extraction from the source face, and the shape difference between source and target faces degrades the quality of the resulting expression animation. To address these difficulties, we propose a robust neighbor-expression transfer (NET) model that captures the spatial relations among sparse facial features. By learning expression behaviors from neighboring face examples, the NET model can reconstruct facial expressions from noisy signals. Building on the NET model, we present a hierarchical method for animating facial sketches, in which the motion vectors of the source face are adjusted from coarse to fine on the target face so that the animation replicates the source expressions. Experimental results demonstrate that the proposed method transfers expressions effectively and robustly from noisy animation signals.
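
To make the neighbor-based denoising idea concrete, the sketch below shows one plausible reading of it: a noisy vector of sparse facial-feature displacements is re-expressed as a weighted combination of its k nearest neighbor expression examples, which suppresses per-frame noise. This is an illustrative assumption, not the authors' implementation; the function name `reconstruct_expression`, the parameter `k`, and the toy data are all hypothetical.

```python
# Hypothetical sketch of a neighbor-based expression reconstruction in the
# spirit of the NET model (not the paper's actual algorithm): denoise a
# displacement vector by linearly combining its k nearest neighbor examples.
import numpy as np

def reconstruct_expression(noisy, examples, k=5):
    """Denoise a feature-displacement vector via its k nearest examples.

    noisy    : (d,) observed displacement vector for one frame
    examples : (n, d) library of displacement vectors from neighbor faces
    k        : number of neighbor examples to combine (assumed parameter)
    """
    # Pick the k library examples closest to the noisy observation.
    dists = np.linalg.norm(examples - noisy, axis=1)
    neighbors = examples[np.argsort(dists)[:k]]        # (k, d)

    # Least-squares weights w minimizing ||noisy - w @ neighbors||^2,
    # an LLE-style linear reconstruction from the local neighborhood.
    w, *_ = np.linalg.lstsq(neighbors.T, noisy, rcond=None)
    s = w.sum()
    if abs(s) > 1e-12:
        w /= s                                         # normalize weights

    return w @ neighbors                               # denoised vector

# Toy usage: 40 example expressions over 68 landmarks (x, y) = 136 dims.
rng = np.random.default_rng(0)
examples = rng.normal(size=(40, 136))
clean = examples[:5].mean(axis=0)
noisy = clean + 0.1 * rng.normal(size=136)
print(np.linalg.norm(reconstruct_expression(noisy, examples) - clean))
```

Under this reading, the hierarchical stage would then map such denoised displacements onto the target sketch coarse-to-fine; that mapping is not sketched here, since the abstract gives no further detail.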