Automatic generation of video narratives from shared UGC

  • Authors:
  • Vilmos Zsombori (Goldsmiths, University of London, London, United Kingdom)
  • Michael Frantzis (Goldsmiths, University of London, London, United Kingdom)
  • Rodrigo Laiola Guimaraes (CWI: Centrum Wiskunde & Informatica, Amsterdam, Netherlands)
  • Marian Florin Ursu (Goldsmiths, University of London, London, United Kingdom)
  • Pablo Cesar (CWI: Centrum Wiskunde & Informatica, Amsterdam, Netherlands)
  • Ian Kegel (BT Innovate & Design, Martlesham Heath, Ipswich, United Kingdom)
  • Roland Craigie (BT Innovate & Design, Martlesham Heath, Ipswich, United Kingdom)
  • Dick C.A. Bulterman (CWI: Centrum Wiskunde & Informatica, Amsterdam, Netherlands)

  • Venue:
  • Proceedings of the 22nd ACM conference on Hypertext and hypermedia
  • Year:
  • 2011


Abstract

This paper introduces an evaluated approach to the automatic generation of video narratives from user-generated content gathered in a shared repository. In the context of social events, end-users record video material with their personal cameras and upload it to a common repository. Video narrative techniques, implemented using the Narrative Structure Language (NSL) and ShapeShifting Media, are employed to automatically generate movies recounting the event. These movies are personalized according to the preferences expressed by each individual end-user, for each individual viewing. The paper describes our prototype narrative system, MyVideos, deployed as a web application, and reports on its evaluation for one specific use case: the assembly of stories of a school concert by parents, relatives, and friends. The evaluations, carried out through focus groups, interviews, and field trials in the Netherlands and the UK, validated the approach and yielded further insights.