Sparse Representation Shape Models

  • Authors:
  • Yuelong Li;Jufu Feng;Li Meng;Jigang Wu

  • Affiliations:
  • School of Computer Science and Software Engineering, Tianjin Polytechnic University, Tianjin, P.R. China;Key Laboratory of Machine Perception (MOE), School of Electronics Engineering and Computer Science, Peking University, Beijing, P.R. China;Automobile Transport Command Department, Military Transportation University, Tianjin, P.R. China;School of Computer Science and Software Engineering, Tianjin Polytechnic University, Tianjin, P.R. China

  • Venue:
  • Journal of Mathematical Imaging and Vision
  • Year:
  • 2014

Abstract

It is well known that incorporating an appropriate shape constraint model during shape extraction can effectively improve localization accuracy. In this paper, a novel deformable shape model, Sparse Representation Shape Models (SRSM), is introduced. Rather than following the commonly used statistical shape constraints, our model constrains shape appearance with a morphological structure, the convex hull of the aligned training samples: only shapes that can be linearly represented by the aligned training samples, with coefficients summing to one, are considered valid. This restriction strictly controls the shape deformation modes, reducing extraction errors and preventing extremely poor outputs. The model is realized via sparse representation, which ensures that the maximum amount of valuable shape information is preserved during shape regularization. In addition, SRSM is interpretable and therefore helpful for further understanding-oriented applications, such as face pose recognition. The effectiveness of SRSM is verified on two publicly available face image datasets, FGNET and FERET.
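To make the constraint described in the abstract concrete, the sketch below shows one plausible way to regularize a candidate shape as a sparse linear combination of aligned training shapes whose coefficients sum to one. It is an illustration of the general idea only, not the authors' formulation; the training-shape matrix `D`, the l1 weight `lam`, and the SLSQP solver are assumptions made for the example.

```python
# Minimal sketch (assumed formulation, not the paper's exact algorithm):
# regularize a candidate shape by a sparse linear combination of aligned
# training shapes with coefficients constrained to sum to one.
import numpy as np
from scipy.optimize import minimize

def regularize_shape(s, D, lam=0.1):
    """Fit shape vector s (length 2L) with the columns of D (2L x N aligned
    training shapes), using an l1 penalty and a sum-to-one constraint."""
    n = D.shape[1]

    def objective(c):
        # Reconstruction error plus an l1 penalty encouraging sparse coefficients.
        return np.sum((D @ c - s) ** 2) + lam * np.sum(np.abs(c))

    cons = ({"type": "eq", "fun": lambda c: np.sum(c) - 1.0},)  # coefficients sum to one
    c0 = np.full(n, 1.0 / n)                                    # start from the mean of the training shapes
    res = minimize(objective, c0, constraints=cons, method="SLSQP")
    return D @ res.x, res.x                                     # regularized shape and its coefficients

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D = rng.normal(size=(20, 8))   # 8 synthetic "aligned training shapes", 10 landmarks each
    s = D @ np.array([0.6, 0.4] + [0.0] * 6) + 0.05 * rng.normal(size=20)  # noisy candidate shape
    s_reg, coeffs = regularize_shape(s, D)
    print("coefficients sum to", coeffs.sum())
```

In this toy setting, the recovered coefficients identify which training shapes the candidate resembles, which is the kind of interpretability the abstract attributes to SRSM (e.g., for face pose recognition).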