Text or pictures? An eyetracking study of how people view digital video surrogates

  • Authors:
  • Anthony Hughes, Todd Wilkens, Barbara M. Wildemuth, Gary Marchionini

  • Affiliation (all authors):
  • Interaction Design Lab, School of Information and Library Science, University of North Carolina at Chapel Hill, Chapel Hill, NC

  • Venue:
  • CIVR '03: Proceedings of the 2nd International Conference on Image and Video Retrieval
  • Year:
  • 2003

Abstract

One important user-oriented facet of digital video retrieval research is how to abstract and display digital video surrogates. This study investigated digital video results pages that use both textual and visual surrogates. For ten different search tasks, twelve subjects selected relevant video records from results lists containing titles, descriptions, and three keyframes. All subjects were eye-tracked to determine where, when, and for how long they looked at the text and image surrogates. Participants looked at and fixated on titles and descriptions statistically significantly more than on the images. Most participants used the text as an anchor from which to make judgments about the search results, and used the images as confirmatory evidence for their selections. No differences were found whether the layout presented the text or the images first in left-to-right order.