Sparsity sharing embedding for face verification

  • Authors:
  • Donghoon Lee;Hyunsin Park;Junyoung Chung;Youngook Song;Chang D. Yoo

  • Affiliations:
  • Department of Electrical Engineering, KAIST, Daejeon, Korea (all authors)

  • Venue:
  • ACCV'12: Proceedings of the 11th Asian Conference on Computer Vision - Volume Part II
  • Year:
  • 2012

Abstract

Face verification in an uncontrolled environment is a challenging task due to large variations in pose, illumination, expression, occlusion, age, scale, and misalignment. To account for these intra-personal settings, this paper proposes a sparsity sharing embedding (SSE) method for face verification that handles a pair of input faces observed under different settings. The proposed SSE method measures the distance between two input faces ${\mathbf x}_A$ and ${\mathbf x}_B$ under intra-personal settings $s_A$ and $s_B$ in two steps: 1) in the association step, ${\mathbf x}_A$ and ${\mathbf x}_B$ are each represented by a reconstructive weight vector and an identity from the generic identity dataset under settings $s_A$ and $s_B$, respectively; 2) in the prediction step, the associated faces are replaced by embedding vectors that preserve identity while being embedded so as to preserve the inter-personal structure across intra-personal settings. Experiments on the Multi-PIE dataset show that the SSE method outperforms the associate-predict (AP) model in terms of verification rate.
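
For illustration only, the sketch below outlines the two-step association/prediction pipeline described in the abstract. It uses ordinary $\ell_1$-regularized sparse coding (scikit-learn's Lasso) as an assumed sparse coder; the dictionaries `D_sA` and `D_sB`, the embedding matrix `E`, and the function name `sse_distance` are hypothetical placeholders, and the sparsity-sharing constraint that couples the two settings in the paper is not enforced here.

```python
# Hypothetical sketch of a two-step association/prediction distance.
# Not the authors' implementation; names and the sparse coder are assumptions.
import numpy as np
from sklearn.linear_model import Lasso

def sse_distance(x_A, x_B, D_sA, D_sB, E, alpha=0.1):
    """Distance between faces x_A, x_B observed under settings s_A, s_B.

    D_sA, D_sB : (n_features, n_identities) generic-identity dictionaries,
                 one column per identity, under settings s_A / s_B.
    E          : (n_embed, n_identities) identity-preserving embedding vectors.
    """
    # 1) Association: represent each input face as a sparse reconstructive
    #    weight vector over the generic identities under its own setting.
    w_A = Lasso(alpha=alpha, fit_intercept=False).fit(D_sA, x_A).coef_
    w_B = Lasso(alpha=alpha, fit_intercept=False).fit(D_sB, x_B).coef_

    # 2) Prediction: replace the associated faces by embedding vectors that
    #    keep identity while discarding the intra-personal setting.
    z_A = E @ w_A
    z_B = E @ w_B

    # Smaller distance suggests the two faces share the same identity.
    return np.linalg.norm(z_A - z_B)
```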