Towards efficient context-specific video coding based on gaze-tracking analysis

  • Authors:
  • D. Agrafiotis; S. J. C. Davies; N. Canagarajah; D. R. Bull

  • Affiliations:
  • University of Bristol, Bristol, UK; University of Bristol, Bristol, UK; University of Bristol, Bristol, UK; University of Bristol, Bristol, UK

  • Venue:
  • ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP)
  • Year:
  • 2007

Abstract

This article presents a framework for model-based, context-dependent video coding that exploits characteristics of the human visual system. The system applies variable-quality coding driven by priority maps, which are created using mostly context-specific rules. The technique is demonstrated through two case studies of specific video contexts, namely open signed content and football sequences. Eye-tracking analysis is employed to identify the viewing characteristics of each context, which are then exploited for coding purposes, either directly or through a gaze prediction model. The framework is shown to achieve a considerable improvement in coding efficiency.
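
To illustrate the general idea of priority-map-driven variable-quality coding described in the abstract, the following is a minimal sketch, not the authors' implementation: it assumes a per-macroblock priority map in [0, 1] (e.g. derived from gaze prediction) and maps it to H.264-style quantization parameters, so low-priority regions receive coarser quantization. The function name, base QP, and offset range are illustrative assumptions.

```python
# Illustrative sketch (not the paper's implementation): convert a per-macroblock
# priority map into a per-macroblock QP map so that high-priority regions keep
# fine quantization while low-priority regions are coded more coarsely.
import numpy as np

def qp_map_from_priorities(priority_map: np.ndarray,
                           base_qp: int = 26,
                           max_qp_offset: int = 12) -> np.ndarray:
    """priority_map: values in [0, 1], where 1 marks highest visual importance."""
    priorities = np.clip(priority_map, 0.0, 1.0)
    # Lower priority -> larger QP offset -> coarser quantization, fewer bits.
    qp_offsets = np.round((1.0 - priorities) * max_qp_offset).astype(int)
    return np.clip(base_qp + qp_offsets, 0, 51)  # H.264 QP range is 0..51

# Example: a 4x6 macroblock grid where a gaze/context model has marked the
# centre region (e.g. the signer or the ball) as high priority.
priority = np.zeros((4, 6))
priority[1:3, 2:4] = 1.0
print(qp_map_from_priorities(priority))
```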