Vector model in support of versatile georeferenced video search

  • Authors:
  • Seon Ho Kim;Sakire Arslan Ay;Byunggu Yu;Roger Zimmermann

  • Affiliations:
  • University of the District of Columbia, Washington, DC, USA;University of Southern California, Los Angeles, CA, USA;University of the District of Columbia, Washington, DC, USA;National University of Singapore, Singapore, Singapore

  • Venue:
  • MMSys '10: Proceedings of the First Annual ACM SIGMM Conference on Multimedia Systems
  • Year:
  • 2010


Abstract

Increasingly, geographic properties are being associated with videos, especially those captured by mobile cameras. The metadata from camera-attached sensors can be used to model the coverage area of a scene as a spatial object, so that videos can be organized, indexed, and searched based on their fields of view (FOVs). The most accurate representation of an FOV is the geometric shape of a circular sector. However, spatial search and indexing methods are traditionally optimized for rectilinear shapes because of their simplicity. Established methods often use an approximate shape, such as a minimum bounding rectangle (MBR), to efficiently filter a large archive for possibly matching candidates; a second, refinement step then applies the time-consuming, precise matching function. MBR estimation has been successful for general spatial overlap queries, but it provides limited flexibility for georeferenced video search. In this study we propose a novel vector-based model for FOV estimation that provides a more versatile basis for georeferenced video search while delivering competitive performance in the filter step. We demonstrate how the vector model can serve as a unified method for traditional overlap queries while also enabling searches that, for example, concentrate on the vicinity of the camera's position or harness its view direction. To the best of our knowledge, no comparable technique exists today.
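
To make the filter-and-refinement idea concrete, the sketch below is a minimal illustration, not the authors' implementation: it assumes planar coordinates and hypothetical field names, represents an FOV as a circular sector, uses the sector's bounding box as the coarse MBR filter, and applies an exact sector containment test as the refinement step.

```python
import math
from dataclasses import dataclass

@dataclass
class FOV:
    # Hypothetical FOV record (planar x, y for brevity; real data would use
    # geographic coordinates): camera position, view direction, viewable
    # angle, and maximum visible distance.
    x: float
    y: float
    direction: float   # optical-axis direction in degrees, counterclockwise from the x axis
    angle: float       # total viewable angle of the sector, in degrees
    radius: float      # maximum visible distance

def mbr(fov: FOV):
    """Filter step: a minimum bounding rectangle of the sector.
    The full circle's bounding box is a safe (if loose) over-approximation."""
    return (fov.x - fov.radius, fov.y - fov.radius,
            fov.x + fov.radius, fov.y + fov.radius)

def point_in_mbr(px: float, py: float, box) -> bool:
    xmin, ymin, xmax, ymax = box
    return xmin <= px <= xmax and ymin <= py <= ymax

def point_in_sector(px: float, py: float, fov: FOV) -> bool:
    """Refinement step: exact containment test against the circular sector."""
    dx, dy = px - fov.x, py - fov.y
    if math.hypot(dx, dy) > fov.radius:
        return False
    bearing = math.degrees(math.atan2(dy, dx)) % 360.0
    diff = abs((bearing - fov.direction + 180.0) % 360.0 - 180.0)
    return diff <= fov.angle / 2.0
```

In this simplified view, a query such as "find videos whose FOV covers point q" would run `point_in_mbr` against the index first and `point_in_sector` only on the surviving candidates; the vector model proposed in the paper is aimed at supporting such queries, including ones that exploit the camera's position or view direction, within the same filter step.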