Cross-modal prediction in audio-visual communication

  • Authors:
  • R. R. Rao; Tsuhan Chen

  • Affiliations:
  • Georgia Inst. of Technol., Atlanta, GA, USA; -

  • Venue:
  • ICASSP '96: Proceedings of the 1996 IEEE International Conference on Acoustics, Speech, and Signal Processing - Volume 04
  • Year:
  • 1996


Abstract

We present a novel means of predicting the shape of a person's mouth from the corresponding speech signal, and we explore applications of this prediction to video coding. The prediction is accomplished by modeling the probability distribution of the audio-visual features as a Gaussian mixture density. The optimal estimate of the visual features given the acoustic features can then be computed from this probability distribution. The ability to predict a person's mouth shape from the corresponding audio leads to a number of interesting joint audio-video coding strategies. In the cross-modal predictive coding system described, a model-based video coder compares measured visual parameters with predicted visual parameters and sends the difference between the two to the receiver. Since the decoder also receives the acoustic data, it can form the same prediction and then reconstruct the original parameters by adding the transmitted error signal.
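The scheme in the abstract can be sketched in code: fit (or assume) a Gaussian mixture over joint audio-visual features, form the MMSE estimate E[v | a] as a responsibility-weighted sum of per-component conditional means, and transmit only the residual between measured and predicted visual parameters. The mixture parameters, feature dimensions, and values below are illustrative placeholders, not taken from the paper.

```python
import numpy as np

# Hypothetical 2-component GMM over a joint (audio, visual) feature pair,
# each 1-D here for clarity. Each row of `means` is [mu_a, mu_v]; each
# entry of `covs` is the 2x2 joint covariance [[S_aa, S_av], [S_va, S_vv]].
weights = np.array([0.6, 0.4])
means = np.array([[0.0, 1.0],
                  [2.0, 3.0]])
covs = np.array([[[1.0,  0.5],
                  [0.5,  1.0]],
                 [[1.0, -0.3],
                  [-0.3, 0.5]]])

def gaussian_pdf(x, mu, var):
    """Scalar Gaussian density N(x; mu, var)."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def predict_visual(a):
    """MMSE estimate E[v | a] under the joint GMM: a responsibility-weighted
    sum of the per-component conditional means."""
    # responsibility of each component given the observed audio feature a
    resp = weights * np.array([gaussian_pdf(a, means[k, 0], covs[k, 0, 0])
                               for k in range(len(weights))])
    resp /= resp.sum()
    # conditional mean per component: mu_v + S_va / S_aa * (a - mu_a)
    cond = means[:, 1] + covs[:, 1, 0] / covs[:, 0, 0] * (a - means[:, 0])
    return float(resp @ cond)

# Encoder: predict the visual parameter from audio, send only the error.
a, v_measured = 1.5, 2.2
v_pred = predict_visual(a)
residual = v_measured - v_pred

# Decoder: also has the audio, so it forms the same prediction and adds
# the transmitted residual to recover the original visual parameter.
v_reconstructed = predict_visual(a) + residual
```

Because encoder and decoder compute the identical prediction from the shared acoustic data, the reconstruction is exact (up to any quantization applied to the residual, which this sketch omits).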