Models for video enrichment

  • Authors:
  • Benoît Encelle; Pierre-Antoine Champin; Yannick Prié; Olivier Aubert

  • Affiliations:
  • Université de Lyon, CNRS, Université Lyon 1, LIRIS, UMR5205, F-69622, Lyon, France (all authors)

  • Venue:
  • Proceedings of the 11th ACM Symposium on Document Engineering
  • Year:
  • 2011


Abstract

Videos are commonly augmented with additional content such as captions, images, audio, and hyperlinks, which are rendered while the video is being played. We call the result of this rendering "enriched videos". This article details an annotation-based approach for producing enriched videos: the enrichment is mainly composed of textual annotations associated with temporal parts of the video and rendered during playback. We first introduce the key notion of enriched video and its associated concepts, then present the models we have developed for annotating videos and for presenting annotations during playback. Finally, we give an overview of a general workflow for producing and viewing enriched videos. This workflow notably illustrates how the proposed models can be used to improve the accessibility of videos for people with sensory disabilities.
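The core idea of the abstract (annotations anchored to temporal parts of a video, selected and rendered at playback time) can be sketched as follows. This is an illustrative assumption, not the authors' actual model from the paper: all class and field names (`Annotation`, `EnrichedVideo`, `begin`, `end`, `active_at`) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    """A textual annotation anchored to a temporal fragment of a video.

    Hypothetical structure: the paper's annotation model is richer.
    """
    begin: float   # start of the temporal fragment, in seconds
    end: float     # end of the temporal fragment, in seconds
    kind: str      # e.g. "caption", "audio-description", "hyperlink"
    content: str   # the textual content to render

@dataclass
class EnrichedVideo:
    """A video plus its annotations; a player renders those active at time t."""
    uri: str
    annotations: list = field(default_factory=list)

    def active_at(self, t: float):
        """Return the annotations to render at playback time t."""
        return [a for a in self.annotations if a.begin <= t < a.end]

# Example: a video with two overlapping annotations.
video = EnrichedVideo("http://example.org/video.ogv", [
    Annotation(0.0, 4.0, "caption", "Opening scene"),
    Annotation(2.0, 6.0, "hyperlink", "http://example.org/more"),
])
```

A player would call `active_at` on each time update (e.g. a `timeupdate` event) and render the returned annotations alongside the video, for instance as captions for deaf users or as text-to-speech input for blind users.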