Viewpoint invariant 3D landmark model inference from monocular 2D images using higher-order priors

  • Authors:
  • Chaohui Wang; Yun Zeng; Loic Simon; Ioannis Kakadiaris; Dimitris Samaras; Nikos Paragios

  • Affiliations:
  • Center for Visual Computing, Ecole Centrale Paris, Châtenay-Malabry, France; Department of Computer Science, Stony Brook University, NY, USA; Center for Visual Computing, Ecole Centrale Paris, Châtenay-Malabry, France; Computational Biomedicine Lab, University of Houston, TX, USA; Department of Computer Science, Stony Brook University, NY, USA; Center for Visual Computing, Ecole Centrale Paris, Châtenay-Malabry, France

  • Venue:
  • ICCV '11 Proceedings of the 2011 International Conference on Computer Vision
  • Year:
  • 2011

Abstract

In this paper, we propose a novel one-shot optimization approach that simultaneously determines both the optimal 3D landmark model and the corresponding 2D projections without explicitly estimating the camera viewpoint, while also handling misdetections and partial occlusions. To this end, a 3D shape manifold is built upon fourth-order interactions of landmarks from a training set, and pose-invariant statistics are obtained in this space. The 3D-2D consistency is also encoded in these higher-order interactions, which eliminates the need for viewpoint estimation. Furthermore, modeling landmark visibility improves the method's performance by handling missing correspondences and occlusions. Inference is addressed through a MAP formulation that is naturally transformed into a higher-order MRF optimization problem and solved using a dual-decomposition-based method. Promising results on standard face benchmarks demonstrate the potential of our approach.
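
As a rough illustration (not the paper's exact formulation), MAP inference over landmark assignments x in such a higher-order MRF is typically posed as an energy minimization over a set of fourth-order cliques C:

$$\hat{x} \;=\; \arg\min_{x} \; \sum_{c \in \mathcal{C}} \theta_c(x_c),$$

where each clique potential $\theta_c$ would encode the pose-invariant 3D shape statistics and the 3D-2D consistency over a quadruple of landmarks, with visibility handled by augmenting the label space. Dual decomposition then splits the objective into tractable subproblems (e.g., one per clique or per tree) and enforces agreement among their copies of the shared variables via Lagrange multipliers updated by subgradient ascent.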