How do we deep-link?: leveraging user-contributed time-links for non-linear video access

  • Authors:
  • Raynor Vliegendhart; Babak Loni; Martha Larson; Alan Hanjalic

  • Affiliation:
  • Multimedia Information Retrieval Lab, Delft University of Technology, Delft, The Netherlands (all four authors)

  • Venue:
  • Proceedings of the 21st ACM international conference on Multimedia
  • Year:
  • 2013

Abstract

This paper studies a new way of accessing videos in a non-linear fashion. Existing non-linear access methods allow users to jump into videos at points that depict specific visual concepts or that are likely to elicit affective reactions. We believe that deep-link comments, which occur unprompted on social video sharing platforms, offer a new opportunity beyond existing methods. With deep-link comments, viewers express themselves about a particular moment in a video by including a time-code. Deep-link comments are special because they reflect viewer perceptions of noteworthiness, which include, but extend beyond, depicted conceptual content and induced affective reactions. Based on deep-link comments collected from YouTube, we develop a Viewer Expressive Reaction Variety (VERV) taxonomy that captures how viewers deep-link. We validate the taxonomy with a user study on a crowdsourcing platform and discuss how it extends conventional relevance criteria. We carry out experiments showing that deep-link comments can be automatically filtered and sorted into VERV categories.
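As a concrete illustration of the filtering step the abstract mentions, the sketch below detects deep-link comments by scanning for time-codes of the form m:ss or h:mm:ss and converting them to second offsets. This is a minimal, assumption-laden example: the regular expression, function names, and sample comments are our own illustration, not the paper's implementation.

```python
import re

# Time-codes such as "2:41" or "1:02:03"; this pattern is an illustrative
# assumption, not the detection rule used in the paper.
TIMECODE_RE = re.compile(r"\b(?:(\d{1,2}):)?(\d{1,2}):([0-5]\d)\b")

def extract_timecodes(comment: str) -> list[int]:
    """Return all time-codes found in a comment, converted to seconds."""
    return [
        int(h or 0) * 3600 + int(m) * 60 + int(s)
        for h, m, s in TIMECODE_RE.findall(comment)
    ]

def filter_deep_link_comments(comments: list[str]) -> list[tuple[str, list[int]]]:
    """Keep only comments containing at least one time-code (deep-link comments)."""
    return [(c, t) for c in comments if (t := extract_timecodes(c))]

if __name__ == "__main__":
    sample = [
        "2:41 that transition is amazing",   # deep-link comment
        "great video overall!",              # ordinary comment, filtered out
        "the bit at 1:02:03 made me laugh",  # deep-link with an hour component
    ]
    for comment, offsets in filter_deep_link_comments(sample):
        print(offsets, comment)
```

On YouTube, each extracted offset maps directly to a deep link via the `t` URL parameter (e.g., `watch?v=VIDEO_ID&t=161s`), which is what makes such comments usable as non-linear entry points into a video.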