Shared shape spaces

  • Authors: Victor Adrian Prisacariu, Ian Reid
  • Affiliations: University of Oxford, UK
  • Venue: ICCV '11: Proceedings of the 2011 International Conference on Computer Vision
  • Year: 2011


Abstract

We propose a method for simultaneous shape-constrained segmentation and parameter recovery. The parameters can describe anything from 3D shape to 3D pose, and we place no restriction on the topology of the shapes, i.e. they can have holes or be made of multiple parts. We use Shared Gaussian Process Latent Variable Models (SGP-LVMs) to learn multimodal shape-parameter spaces. These allow non-linear embeddings of the high-dimensional shape and parameter spaces in low-dimensional spaces in a fully probabilistic manner. We propose an efficient method for exploring the multimodality in the joint space, by learning a mapping from the latent space to a space that encodes the similarity between shapes. We further extend the SGP-LVM to a model that uses a hierarchy of embeddings and show that this yields faster convergence and greater accuracy than the standard non-hierarchical embedding. Shapes are represented implicitly using level sets, and inference is made tractable by compressing the level set embedding functions with discrete cosine transforms. We show state-of-the-art results in various fields, ranging from pose recovery and gaze tracking to monocular 3D reconstruction.
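The abstract's compression step, representing a level set embedding function by a small set of low-frequency discrete cosine transform coefficients, can be illustrated with a minimal sketch. This is not the authors' code: the grid size, the number of retained coefficients `k`, and the circle used as a stand-in shape are assumptions made purely for illustration.

```python
# Sketch of DCT compression of a level-set embedding function (assumed setup,
# not the paper's implementation).
import numpy as np
from scipy.fft import dctn, idctn

def signed_distance_circle(n=128, radius=0.5):
    """Signed distance function on an n x n grid: negative inside the circle."""
    ys, xs = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n), indexing="ij")
    return np.sqrt(xs**2 + ys**2) - radius

def compress_level_set(phi, k=16):
    """Keep only the k x k low-frequency DCT coefficients of the embedding."""
    coeffs = dctn(phi, norm="ortho")
    return coeffs[:k, :k].copy()

def decompress_level_set(coeffs, n=128):
    """Reconstruct an n x n embedding function from the truncated coefficients."""
    full = np.zeros((n, n))
    k = coeffs.shape[0]
    full[:k, :k] = coeffs
    return idctn(full, norm="ortho")

phi = signed_distance_circle()
compressed = compress_level_set(phi, k=16)   # 128*128 values -> 16*16 coefficients
phi_rec = decompress_level_set(compressed)
print("max reconstruction error:", np.abs(phi - phi_rec).max())
```

Because the embedding function is smooth, most of its energy lies in the low-frequency coefficients, so the zero level set (the shape contour) survives roughly 64x compression; it is this small coefficient vector, rather than the full grid, that a low-dimensional latent model can tractably work with.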