Dynamic key frame presentation techniques for augmenting video browsing

  • Authors:
  • Tony Tse; Gary Marchionini; Wei Ding; Laura Slaughter; Anita Komlodi

  • Affiliation:
  • University of Maryland, College Park, MD (all authors)

  • Venue:
  • AVI '98: Proceedings of the Working Conference on Advanced Visual Interfaces
  • Year:
  • 1998

Abstract

Because of the unique temporal and spatial properties of video data, different techniques for summarizing videos have been proposed. Key frames extracted directly from a video inform users about its content without requiring them to view the entire video. As part of ongoing work to develop video browsing interfaces, several interface displays based on key frames were investigated. Variations on dynamic key frame "slide shows" were examined and compared to a static key frame "filmstrip" display. The slide show mechanism displays key frames in rapid succession and is designed to facilitate visual browsing by exploiting human perceptual capabilities. User studies were conducted in a series of three experiments. User performance on object recognition and gist determination tasks was investigated as a function of key frame display rate, number of simultaneous displays, and user perception. No significant performance degradation was detected at display rates up to 8 key frames per second, but performance degraded significantly at higher rates. Performance on gist determination tasks degraded less severely than performance on object recognition tasks as display rates increased. Furthermore, gist determination performance dropped significantly between three and four simultaneous slide shows in a single display. Users generally preferred key frame filmstrips to dynamic displays, although objective measures of performance were mixed. Implications for visual interface design and further questions for future research are provided.
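
To illustrate the slide-show mechanism described in the abstract, the sketch below cycles through pre-extracted key frame images at a fixed rate. It is a minimal illustration, not the authors' implementation: the function name `play_slide_show`, the `keyframes/*.jpg` path, and the use of OpenCV for display are all assumptions introduced here; only the 8 frames-per-second default reflects a figure reported in the abstract.

```python
import glob
import cv2  # OpenCV, assumed available for simple image display


def play_slide_show(frame_paths, frames_per_second=8, window="key frames"):
    """Display pre-extracted key frames in rapid succession.

    The abstract reports no significant performance degradation at
    display rates up to 8 key frames per second, so 8 is the default.
    """
    delay_ms = max(1, int(1000 / frames_per_second))
    for path in frame_paths:
        frame = cv2.imread(path)
        if frame is None:
            continue  # skip files that are not readable images
        cv2.imshow(window, frame)
        # waitKey both paces the slide show and lets the user quit with 'q'
        if cv2.waitKey(delay_ms) & 0xFF == ord("q"):
            break
    cv2.destroyWindow(window)


if __name__ == "__main__":
    # Hypothetical directory of key frames already extracted from one video
    paths = sorted(glob.glob("keyframes/*.jpg"))
    play_slide_show(paths, frames_per_second=8)
```

A static "filmstrip" display, by contrast, would lay the same key frames out side by side in a single image rather than paging through them over time.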