A new lip feature representation method for video-based bimodal authentication

  • Authors:
  • Hua Ouyang; Tan Lee

  • Affiliations:
  • Department of Electronic Engineering, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong (both authors)

  • Venue:
  • MMUI '05 Proceedings of the 2005 NICTA-HCSNet Multimodal User Interaction Workshop - Volume 57
  • Year:
  • 2006

Abstract

As low-cost video transmission becomes popular, video-based bimodal (audio and visual) authentication has great potential in applications that require access control. It is especially useful for hand-held terminals, which are often used in adverse environments where signal quality is poor. When the human voice is used for authentication, one of the most relevant visual features is the dynamic movement of the lips. In this research, we investigate the use of static and dynamic features of speaking lips in the context of voice-based authentication. A new feature representation that preserves both the appearance and the motion pattern of speaking lips is proposed. The dimensionality of the extracted features is reduced by multiple discriminant analysis (MDA), and nearest-neighbor matching is used for classification. Our method achieves an identification rate of 98% with lip features alone for 200 clients of the XM2VTS database. Experiments on speaker verification using fused audio and visual features are ongoing.
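
The paper itself provides no code, but the classification stage the abstract describes (MDA dimensionality reduction followed by nearest-neighbor matching) can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' implementation: multiple discriminant analysis is realized as multi-class linear discriminant analysis via scikit-learn, and the lip feature vectors `X` and client labels `y` are hypothetical random stand-ins for features extracted from XM2VTS.

```python
# Sketch of the MDA + nearest-neighbor identification pipeline from the
# abstract. Lip feature extraction is not reproduced; X and y below are
# toy stand-ins, not XM2VTS data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_clients, samples_per_client, n_features = 10, 20, 64  # toy sizes
X = rng.normal(size=(n_clients * samples_per_client, n_features))
y = np.repeat(np.arange(n_clients), samples_per_client)

# MDA projects onto at most (n_classes - 1) discriminant axes;
# a 1-nearest-neighbor rule then identifies the client.
pipeline = make_pipeline(
    LinearDiscriminantAnalysis(n_components=min(n_clients - 1, n_features)),
    KNeighborsClassifier(n_neighbors=1),
)
pipeline.fit(X, y)
print(pipeline.predict(X[:3]))  # predicted client IDs for three samples
```

With real data, `X` would hold the proposed static-plus-dynamic lip representations per utterance, and identification accuracy would be measured on held-out samples per client.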