Discovering panoramas in web videos

  • Authors:
  • Feng Liu, Yu-hen Hu, Michael L. Gleicher

  • Affiliation:
  • University of Wisconsin-Madison, Madison, WI, USA (all authors)

  • Venue:
  • MM '08 Proceedings of the 16th ACM international conference on Multimedia
  • Year:
  • 2008


Abstract

While methods for stitching panoramas have been successful given proper source images, providing those source images remains a burden. In this paper, we present a method to discover panoramic source images within widely available web videos. The challenge comes from the fact that many of these videos were not recorded with panorama stitching in mind. Our method aims to find segments within a video that can serve as panorama sources. Specifically, we determine whether a video segment is a valid panorama source according to three criteria. First, its camera motion should cover a wide field of view of the scene. Second, its frames should be "mosaicable", meaning that the inter-frame motion should satisfy the underlying conditions for stitching a panorama. Third, its frames should have good image quality. Based on these criteria, we formulate discovering panoramas in a video as an optimization problem that finds an optimal set of video segments to use as panorama sources. After discovering these panorama sources, we synthesize regular scene panoramas from them. When significant dynamics are detected in the sources, we fuse the dynamics into the scene panoramas to create activity synopses that convey them. Our experiment querying panoramas from YouTube confirms the feasibility of using web videos as panorama sources and demonstrates the effectiveness of our method.
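The segment-selection idea in the abstract can be sketched as follows. This is a minimal illustrative stand-in, not the paper's actual formulation: the per-segment scores, the weighted combination, the threshold, and the greedy non-overlapping selection are all assumptions introduced here, whereas the paper formulates the selection as an optimization over the three criteria (field-of-view coverage, mosaicability, image quality).

```python
# Hypothetical sketch: score each candidate video segment on the three
# criteria named in the abstract, then pick high-scoring, non-overlapping
# segments as panorama sources. Weights, threshold, and the greedy
# strategy are illustrative assumptions, not the paper's method.
from dataclasses import dataclass

@dataclass
class Segment:
    start: int            # first frame index
    end: int              # last frame index (inclusive)
    coverage: float       # how wide a field of view the sweep covers, in [0, 1]
    mosaicability: float  # how well inter-frame motion fits stitching, in [0, 1]
    quality: float        # average frame image quality, in [0, 1]

def score(seg: Segment, w=(0.4, 0.4, 0.2)) -> float:
    """Weighted combination of the three criteria (weights are assumptions)."""
    return w[0] * seg.coverage + w[1] * seg.mosaicability + w[2] * seg.quality

def select_panorama_sources(candidates, min_score=0.5):
    """Greedy stand-in for the paper's optimization: take the best-scoring
    segments first, skipping any that overlap an already-chosen one."""
    chosen = []
    for seg in sorted(candidates, key=score, reverse=True):
        if score(seg) < min_score:
            break  # remaining segments score even lower
        if all(seg.end < c.start or seg.start > c.end for c in chosen):
            chosen.append(seg)
    return sorted(chosen, key=lambda s: s.start)

candidates = [
    Segment(0, 120, 0.9, 0.8, 0.7),    # wide, steady pan: good source
    Segment(100, 200, 0.3, 0.4, 0.9),  # sharp frames but narrow, overlapping sweep
    Segment(250, 400, 0.7, 0.9, 0.6),  # second usable sweep
]
sources = select_panorama_sources(candidates)
print([(s.start, s.end) for s in sources])  # → [(0, 120), (250, 400)]
```

In this toy run the middle segment is rejected both for its low combined score and for overlapping the first pan, matching the intuition that a valid panorama source needs a wide, stitchable, clean sweep rather than merely sharp frames.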