Dynamic Bayesian Networks for Audio-Visual Speaker Recognition

  • Authors:
  • Dongdong Li;Yingchun Yang;Zhaohui Wu

  • Affiliations:
  • Department of Computer Science and Technology, Zhejiang University, Hangzhou, P.R. China (all authors)

  • Venue:
  • ICB'06 Proceedings of the 2006 international conference on Advances in Biometrics
  • Year:
  • 2006

Abstract

Audio-visual speaker recognition promises higher performance than any single-modal biometric system. This paper further improves a novel approach to bimodal speaker recognition based on Dynamic Bayesian Networks (DBNs). We investigate five different topologies for a feature-level fusion framework using DBNs, and demonstrate that the performance of multimodal systems can be further improved by appropriately modeling the correlation between the speech features and the face features. Experiments conducted on a multimodal database of 54 users show promising results, with an absolute improvement of about 7.44% in the best case and 3.13% in the worst case compared with a single-modal speaker recognition system.
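The abstract describes feature-level fusion, in which per-frame speech and face features are combined into a single observation stream before modeling. The paper's five DBN topologies are not reproduced here, but the fusion step itself can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature names, dimensions, and the rate-alignment-by-repetition strategy are all assumptions for the example.

```python
import numpy as np

# Hypothetical per-frame features (dimensions are illustrative, not from
# the paper): 100 audio frames of 13 MFCC coefficients, and 25 video
# frames of 30 face-appearance coefficients.
rng = np.random.default_rng(0)
audio = rng.standard_normal((100, 13))
video = rng.standard_normal((25, 30))

def fuse_features(audio: np.ndarray, video: np.ndarray) -> np.ndarray:
    """Feature-level fusion: upsample the slower (video) stream to the
    audio frame rate by frame repetition, then concatenate per frame so
    each fused frame carries both modalities."""
    factor = audio.shape[0] // video.shape[0]        # e.g. 100 // 25 = 4
    video_up = np.repeat(video, factor, axis=0)      # (100, 30)
    return np.concatenate([audio, video_up], axis=1) # (100, 43)

fused = fuse_features(audio, video)
print(fused.shape)  # (100, 43)
```

The fused observation sequence would then be fed to the DBN, whose topology determines how dependencies between the audio and visual sub-vectors are modeled.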