Invariant surface-based shape descriptor for dynamic surface encoding

  • Authors:
  • Tony Tung; Takashi Matsuyama

  • Affiliations:
  • Graduate School of Informatics, Kyoto University, Japan; Graduate School of Informatics, Kyoto University, Japan

  • Venue:
  • ACCV'12: Proceedings of the 11th Asian Conference on Computer Vision - Volume Part I
  • Year:
  • 2012

Abstract

This paper presents a novel approach to representing spatio-temporal visual information. We introduce a surface-based shape model whose structure is invariant to surface variations over time, to describe 3D dynamic surfaces (e.g., obtained from multiview video capture). The descriptor is defined as a graph lying on object surfaces and anchored to invariant local features (e.g., extremal points). Geodesic-consistency-based priors are used as cues within a probabilistic framework to keep the graph invariant even though the surfaces undergo non-rigid deformations. Our contribution equips 3D geometric data with a temporally invariant structure that relies only on intrinsic surface properties and is independent of surface parameterization (i.e., surface mesh connectivity). The proposed descriptor can therefore be used for efficient dynamic surface encoding, through transformation into 2D (geometry) images, as its structure provides an invariant representation for 3D mesh models. Various experiments on challenging, publicly available datasets are performed to assess the invariance property and the performance of the descriptor.
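
The abstract anchors the descriptor graph to invariant local features such as extremal points. As a rough illustration only (not the authors' implementation), extremal points on a triangle mesh are commonly taken as local maxima of the average geodesic distance, which is stable under near-isometric, non-rigid deformations. The Python sketch below approximates geodesic distances with Dijkstra shortest paths on the mesh edge graph; the function names (edge_graph, geodesic_distances, extremal_points) and the source-sampling parameter are illustrative assumptions.

```python
# Illustrative sketch only: extremal points as local maxima of the
# (approximate) average geodesic distance over a triangle mesh.
import heapq
from collections import defaultdict

def edge_graph(vertices, faces):
    """Undirected adjacency list weighted by Euclidean edge length."""
    adj = defaultdict(dict)
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            w = sum((vertices[u][k] - vertices[v][k]) ** 2 for k in range(3)) ** 0.5
            adj[u][v] = w
            adj[v][u] = w
    return adj

def geodesic_distances(adj, source, n):
    """Single-source shortest-path (graph-geodesic) distances via Dijkstra."""
    dist = [float("inf")] * n
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def extremal_points(vertices, faces, num_sources=32):
    """Vertices that are local maxima of the average geodesic distance.

    Such points (e.g., fingertips, head, feet on a human scan) tend to
    persist under non-rigid, near-isometric deformations, which is why a
    descriptor graph can be anchored to them.
    """
    n = len(vertices)
    adj = edge_graph(vertices, faces)
    sources = range(0, n, max(1, n // num_sources))  # coarse source sampling
    avg = [0.0] * n
    for s in sources:
        d = geodesic_distances(adj, s, n)
        for i in range(n):
            avg[i] += d[i]
    # A vertex is extremal if no neighbor has a larger accumulated distance.
    return [i for i in range(n)
            if all(avg[i] >= avg[j] for j in adj[i])]
```

Because the quantity is intrinsic (it depends only on surface distances, not on the embedding or the mesh connectivity used to compute it), the detected anchors remain comparable across frames of a deforming surface, which is the property the descriptor relies on.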