Accurate, Real-Time, Unadorned Lip Tracking

  • Authors:
  • Robert Kaucic; Andrew Blake

  • Venue:
  • ICCV '98 Proceedings of the Sixth International Conference on Computer Vision
  • Year:
  • 1998

Abstract

Human speech is inherently multi-modal, consisting of both audio and visual components. Recently, researchers have shown that incorporating information about the position of the lips into acoustic speech recognisers enables robust recognition of noisy speech. In the case of Hidden Markov Model-based recognition, we show that this happens because the visual signal stabilises the alignment of states. We also show that unadorned lips, both the inner and outer contours, can be robustly tracked in real time on general-purpose workstations. To accomplish this, efficient algorithms are employed that contain three key components: shape models, motion models, and focused colour feature detectors, all of which are learnt from examples.