What's next?: emergent storytelling from video collection

  • Authors: Edward Yu-Te Shen; Henry Lieberman; Glorianna Davenport

  • Affiliations: MIT Media Lab, Cambridge, MA, USA (all authors)

  • Venue: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI)
  • Year: 2009

Abstract

In visual storytelling, narrative development relies on a particular temporal ordering of shots, sequences, and scenes. Rarely is this ordering cast in stone; rather, it reflects a myriad of interdependent decisions about the interplay of structure, narrative arc, and character development. For storytellers, particularly those developing narratives from large documentary archives, it would be helpful to have a visualization system that partners with them to suggest compelling story paths. We present Storied Navigation, a video editing system that helps authors compose a sequence of scenes that tells a story by selecting from a corpus of annotated clips. The clips are annotated in unrestricted natural language. Authors can also type a story in unrestricted English, and the system finds candidate clips that best match high-level elements of the story. Beyond simple keyword matching, these elements can include characters, emotions, themes, and story structure. Authors can also interactively replace existing scenes or have the system predict the next scene to continue a story, based on these characteristics. Storied Navigation gives the author the feel of brainstorming about the story rather than simply editing the media.
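To make the clip-matching idea concrete, here is a minimal, hypothetical Python sketch. It is not the authors' implementation (Storied Navigation's actual matching draws on richer natural-language understanding than shown here); the element extractor, the tiny lexicons, and the scoring weights below are all invented for illustration only.

```python
# Hypothetical sketch of the abstract's idea: match a free-text story line
# against natural-language clip annotations on high-level story elements
# (characters, emotions, themes) rather than raw keyword overlap alone.
from dataclasses import dataclass, field

# Toy lexicons standing in for real NLP: map surface words to story elements.
EMOTION_WORDS = {"happy": "joy", "joyful": "joy", "sad": "sadness",
                 "afraid": "fear", "scared": "fear", "tense": "tension"}
THEME_WORDS = {"reunion": "family", "family": "family", "storm": "conflict",
               "argument": "conflict", "journey": "travel", "trip": "travel"}

@dataclass
class Clip:
    clip_id: str
    annotation: str                      # unrestricted natural-language annotation
    elements: dict = field(default_factory=dict)

def extract_elements(text: str) -> dict:
    """Pull story elements from free text (crude: capitalized tokens ~ characters)."""
    words = text.replace(",", " ").replace(".", " ").split()
    return {
        "characters": {w for w in words if w[:1].isupper()},
        "emotions": {EMOTION_WORDS[w.lower()] for w in words if w.lower() in EMOTION_WORDS},
        "themes": {THEME_WORDS[w.lower()] for w in words if w.lower() in THEME_WORDS},
    }

def score(story_elems: dict, clip_elems: dict) -> float:
    """Weighted overlap of the story's elements with a clip's annotated elements."""
    weights = {"characters": 2.0, "emotions": 1.5, "themes": 1.0}
    return sum(weights[k] * len(story_elems[k] & clip_elems[k]) for k in weights)

def suggest_next(story_line: str, clips: list[Clip]) -> Clip:
    """Return the annotated clip that best continues the typed story line."""
    story_elems = extract_elements(story_line)
    return max(clips, key=lambda c: score(story_elems, c.elements))

if __name__ == "__main__":
    corpus = [
        Clip("c1", "Maria greets her family at the airport, joyful reunion"),
        Clip("c2", "A storm gathers while the crew looks afraid"),
        Clip("c3", "Maria packs for a long trip, sad to leave"),
    ]
    for c in corpus:
        c.elements = extract_elements(c.annotation)
    pick = suggest_next("Maria is happy to see her family again", corpus)
    print(pick.clip_id, "-", pick.annotation)   # selects c1
```

In this toy version, the example query shares a character (Maria), an emotion (joy), and a theme (family) with clip c1, so c1 outscores clips that match on fewer elements; swapping in real entity, emotion, and theme extraction would preserve the same suggest-and-rank loop.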