Multi-Modal Human Verification Using Face and Speech

  • Authors:
  • Changhan Park; Joonki Paik; Taewoong Choi; Soonhyob Kim; Youngouk Kim; Jaechan Namkung

  • Affiliations:
  • Graduate School of Advanced Imaging Science, Multimedia, and Film, Chung-Ang University, Korea; Graduate School of Advanced Imaging Science, Multimedia, and Film, Chung-Ang University, Korea; Kwangwoon University, Seoul, Korea; Kwangwoon University, Seoul, Korea; Korea Electronics Technology Institute, Korea; Kwangwoon University, Korea

  • Venue:
  • ICVS '06 Proceedings of the Fourth IEEE International Conference on Computer Vision Systems
  • Year:
  • 2006

Abstract

In this paper, we propose a personal verification method that uses both face and speech to improve upon the verification rate of single-biometric systems. The false acceptance rate (FAR) and false rejection rate (FRR) have been fundamental bottlenecks of real-time personal verification. The proposed multimodal biometric method improves both verification rate and reliability in real time by overcoming the technical limitations of single-biometric verification. It uses principal component analysis (PCA) for face recognition, a hidden Markov model (HMM) for speech recognition, and fuzzy logic for the final verification decision. Experimental results show that the proposed system reduces the FAR to 0.0001%, demonstrating that it overcomes the limitations of single-biometric systems and provides stable personal verification in real time.
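
The abstract describes score-level fusion of a PCA face matcher and an HMM speech matcher through fuzzy logic. The sketch below illustrates the fuzzy-fusion idea only: the two matchers are stubbed out as normalized scores, and the membership functions, rule base, weights, and 0.5 threshold are illustrative assumptions, not the authors' parameters.

```python
"""Minimal sketch of fuzzy-logic fusion of face and speech match scores.

The face score is assumed to come from a PCA (eigenface) matcher and the
speech score from an HMM-based matcher, as in the paper; both are stubbed
here as values already normalized to [0, 1].
"""


def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b on support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)


def fuzzy_fuse(face_score: float, speech_score: float) -> float:
    """Combine two normalized match scores into one fused confidence."""
    # Fuzzify each score into "low" / "high" degrees of membership.
    face_low, face_high = tri(face_score, -0.5, 0.0, 0.6), tri(face_score, 0.4, 1.0, 1.5)
    sp_low, sp_high = tri(speech_score, -0.5, 0.0, 0.6), tri(speech_score, 0.4, 1.0, 1.5)

    # Illustrative rule base (min = AND, max = OR):
    #   strong accept if face is high AND speech is high
    #   weak accept   if exactly one modality is high
    #   reject        if face is low AND speech is low
    accept_strong = min(face_high, sp_high)
    accept_weak = max(min(face_high, sp_low), min(face_low, sp_high))
    reject = min(face_low, sp_low)

    # Defuzzify with a weighted average of the rule activations.
    num = 1.0 * accept_strong + 0.6 * accept_weak + 0.0 * reject
    den = accept_strong + accept_weak + reject
    return num / den if den > 0 else 0.0


if __name__ == "__main__":
    # Stub scores standing in for the PCA face matcher and HMM speech matcher.
    face_score, speech_score = 0.82, 0.35
    confidence = fuzzy_fuse(face_score, speech_score)
    print(f"fused confidence = {confidence:.3f}",
          "-> accept" if confidence >= 0.5 else "-> reject")
```

In this kind of fusion, requiring agreement between modalities before a strong accept is what drives the FAR down, while the weak-accept rule keeps the FRR from growing when one modality is degraded (e.g., noisy speech); the specific rules and weights above are placeholders for whatever the paper's fuzzy decision stage actually uses.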