Speaker localisation using audio-visual synchrony: an empirical study

  • Authors:
  • Harriet J. Nock; Giridharan Iyengar; Chalapathy Neti

  • Affiliations:
  • IBM TJ Watson Research Center, Yorktown Heights, NY (all authors)

  • Venue:
  • CIVR '03: Proceedings of the 2nd International Conference on Image and Video Retrieval
  • Year:
  • 2003

Abstract

This paper reviews definitions of audio-visual synchrony and examines their empirical behaviour on test sets up to 200 times larger than those used by other authors. The results give new insights into the practical utility of existing synchrony definitions and justify applying audio-visual synchrony techniques to the problem of active speaker localisation in broadcast video. Performance is evaluated using a test set of twelve clips of alternating speakers from the multiple-speaker CUAVE corpus. Accuracy of 76% is obtained for the task of identifying the active member of a speaker pair at different points in time, comparable to the performance given by two purely video image-based schemes. Accuracy of 65% is obtained on the more challenging task of locating a point within a 100×100 pixel square centred on the active speaker's mouth without prior face detection; the upper bound on performance if perfect face detection were available is 69%. This result is significantly better than that of the two purely video image-based schemes.
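For readers unfamiliar with this class of techniques, the sketch below illustrates one common family of synchrony definitions reviewed in this line of work: a per-pixel Gaussian mutual-information score between an audio feature and pixel intensities, in the style of Hershey and Movellan. The feature choices (a scalar audio feature per frame, grayscale intensities) and all function names here are illustrative assumptions, not the authors' exact formulation.

    import numpy as np

    def gaussian_mutual_information(a, v):
        # Under a joint Gaussian assumption, I(A; V) = -0.5 * log(1 - rho^2),
        # where rho is the Pearson correlation between the two sequences.
        rho = np.corrcoef(a, v)[0, 1]
        rho = np.clip(rho, -0.999999, 0.999999)  # guard against log(0)
        return -0.5 * np.log(1.0 - rho ** 2)

    def synchrony_map(audio_feature, frames):
        # audio_feature: (T,) scalar audio feature per video frame
        #                (e.g. short-time energy, an assumed choice)
        # frames: (T, H, W) grayscale frames time-aligned with the audio feature
        # Returns an (H, W) score map; its argmax is the estimated speaker location.
        T, H, W = frames.shape
        scores = np.zeros((H, W))
        for y in range(H):
            for x in range(W):
                scores[y, x] = gaussian_mutual_information(
                    audio_feature, frames[:, y, x])
        return scores

    # Toy usage: recover the pixel that co-varies with the audio in synthetic data.
    rng = np.random.default_rng(0)
    frames = rng.normal(size=(50, 8, 8))
    audio = frames[:, 3, 4] + 0.5 * rng.normal(size=50)  # pixel (3, 4) tracks audio
    scores = synchrony_map(audio, frames)
    y, x = np.unravel_index(scores.argmax(), scores.shape)
    print(int(y), int(x))  # expected: 3 4

In practice such per-pixel maps are noisy, so localisation schemes typically smooth or aggregate scores over a spatial window before taking the maximum; the paper's mouth-localisation task, for instance, scores hits within a 100×100 pixel region around the active speaker's mouth.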