Multi-video summary and skim generation of sensor-rich videos in geo-space

  • Authors:
  • Ying Zhang, Guanfeng Wang, Beomjoo Seo, Roger Zimmermann

  • Affiliations:
  • National University of Singapore, Singapore (all authors)

  • Venue:
  • Proceedings of the 3rd Multimedia Systems Conference
  • Year:
  • 2012

Abstract

User-generated videos have become increasingly popular in recent years. Advances in camera technology have made it easy and convenient to record videos with mobile devices such as smartphones. We consider an application in which users collect and share a large set of videos related to a geographic area, say a city. Such a repository can be a valuable source of information for prospective tourists who plan to visit the city and would like a preview of its main areas. The challenge we address is how to automatically create a preview video summary from a large set of source videos. The main features of our technique are that it is fully automatic and that it leverages sensor meta-data acquired in conjunction with the videos. The meta-data, collected from GPS and compass sensors, describes the viewable scenes of the videos. Our method then proceeds in three steps through analysis of this sensor data. First, we generate a single-video summary: shot boundaries are detected based on different types of camera motion, and key frames are extracted according to motion patterns. Second, we build video skims for popular places (i.e., hotspots), aiming to provide maximal coverage of each hotspot area with minimal redundancy (per-spot multi-video summary). Finally, the individual hotspot skims are linked together to generate a pleasant video tour that visits all the popular places (multi-spot multi-video summary).
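The two core ideas in the abstract (a viewable-scene test driven by GPS position and compass heading, and a coverage-with-minimal-redundancy selection of segments for a hotspot) can be sketched as below. This is only an illustrative approximation, not the authors' implementation: the planar field-of-view sector test, the greedy set-cover selection, and all names and parameters (`fov_deg`, `radius_m`, the grid-cell coverage sets) are assumptions introduced here for clarity.

```python
import math

def in_viewable_scene(cam, heading_deg, target, fov_deg=60.0, radius_m=100.0):
    """Planar sketch of a viewable-scene test: is `target` inside the
    camera's field-of-view sector?  `cam`/`target` are (x, y) in metres;
    `heading_deg` is the compass heading, degrees clockwise from north (+y).
    The sector half-angle and radius are hypothetical defaults."""
    dx, dy = target[0] - cam[0], target[1] - cam[1]
    if math.hypot(dx, dy) > radius_m:
        return False                      # outside the viewing range
    bearing = math.degrees(math.atan2(dx, dy)) % 360.0
    # Smallest signed angular difference between bearing and heading.
    diff = abs((bearing - heading_deg + 180.0) % 360.0 - 180.0)
    return diff <= fov_deg / 2.0

def greedy_skim(segments, hotspot_cells):
    """Greedy set-cover stand-in for the per-spot summary: pick video
    segments (name, set of hotspot grid cells they view) until the hotspot
    is covered, always taking the segment that adds the most new cells,
    which keeps redundancy low."""
    uncovered, chosen = set(hotspot_cells), []
    while uncovered:
        best = max(segments, key=lambda s: len(s[1] & uncovered))
        if not (best[1] & uncovered):
            break                         # remaining cells are unviewable
        chosen.append(best[0])
        uncovered -= best[1]
    return chosen
```

A segment's coverage set would come from evaluating `in_viewable_scene` per frame against a grid over the hotspot; `greedy_skim([("a", {1, 2}), ("b", {2, 3}), ("c", {3, 4})], {1, 2, 3, 4})` then selects `["a", "c"]`, skipping the redundant segment.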