Semi-supervised facial landmark annotation

  • Authors:
  • Yan Tong, Xiaoming Liu, Frederick W. Wheeler, Peter H. Tu

  • Affiliations:
  • Department of Computer Science & Engineering, University of South Carolina, Columbia, SC 29208, United States (Y. Tong); Visualization and Computer Vision Lab., GE Global Research, Niskayuna, NY 12309, United States (X. Liu, F. W. Wheeler, P. H. Tu)

  • Venue:
  • Computer Vision and Image Understanding
  • Year:
  • 2012


Abstract

Landmark annotation for training images is essential for many learning tasks in computer vision, such as object detection, tracking, and alignment. Image annotation is typically conducted manually, which is both labor-intensive and error-prone. To improve this process, this paper proposes a new approach to estimating the locations of a set of landmarks for a large image ensemble using manually annotated landmarks for only a small number of images in the ensemble. Our approach, named semi-supervised least-squares congealing, aims to minimize an objective function defined on both annotated and unannotated images. A shape model is learned online to constrain the landmark configuration. We employ an iterative coarse-to-fine patch-based scheme together with a greedy patch selection strategy for landmark location estimation. Extensive experiments on facial images show that our approach can reliably and accurately annotate landmarks for a large image ensemble starting with a small number of manually annotated images, under several challenging scenarios.
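The core idea of least-squares congealing — warping unannotated images so they agree with a small set of annotated anchors under a squared-error objective — can be illustrated with a toy 1-D analogue. The sketch below is not the paper's algorithm (which operates on 2-D image patches with a learned shape constraint); it only shows the flavor of the objective: each "unannotated" signal is assigned a shift parameter, estimated by gradient descent on its squared distance to the mean of the "annotated" (already aligned) signals. All names and parameters here are illustrative assumptions.

```python
import numpy as np

def estimate_shift(signal, target, xs, t0=0.0, lr=0.5, iters=400):
    """Toy 1-D congealing step: find shift t minimizing
    E(t) = sum_x (signal(x + t) - target(x))^2
    via gradient descent with a finite-difference gradient.
    (Illustrative analogue only, not the paper's 2-D patch method.)"""
    t = t0
    eps = 1e-4
    for _ in range(iters):
        warped = np.interp(xs + t, xs, signal)          # signal shifted by t
        warped_eps = np.interp(xs + t + eps, xs, signal)
        # finite-difference gradient of E with respect to t
        grad = 2.0 * np.sum((warped - target) * (warped_eps - warped) / eps)
        t -= lr * grad / len(xs)                        # normalized step
    return t

# "Annotated" anchor: a Gaussian bump centered at 0 (already aligned).
xs = np.linspace(-5.0, 5.0, 201)
target = np.exp(-xs**2)
# "Unannotated" signal: the same bump displaced by 1.5.
signal = np.exp(-(xs - 1.5)**2)

t_hat = estimate_shift(signal, target, xs)
```

Recovering `t_hat` close to the true displacement (1.5 here) mirrors how congealing estimates per-image alignment parameters; the paper's semi-supervised variant extends this by defining the objective jointly over annotated and unannotated images and constraining the result with an online-learned shape model.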