Video synchronization as one-class learning

  • Authors:
  • Hakan Haberdar; Shishir K. Shah

  • Affiliations:
  • University of Houston, Houston, Texas; University of Houston, Houston, Texas

  • Venue:
  • Proceedings of the 27th Conference on Image and Vision Computing New Zealand
  • Year:
  • 2012

Abstract

Synchronization of videos of the same scene recorded at different times is the first step in many applications related to video surveillance, remote sensing, and medical diagnosis. When a pair of corresponding frames from the different videos is provided, synchronizing the remaining frames is a relatively easy task. Unfortunately, this initial correspondence is usually not available. To avoid an exhaustive search for the initial match, most existing solutions rely either on prior information or on additional hardware. It would be beneficial to have a method that provides the initial match automatically. In this paper, we investigate the feasibility of one-class learning for the problem of video synchronization. We propose a hybrid one-class learner that computes a similarity score between two frames from different videos based on their visual features by combining the outputs of Support Vector Machines and a Replicator Neural Network. The learner first finds a small set of potentially corresponding frames; the exact match is then determined by minimizing the similarity error within this set. We apply the proposed synchronization method to videos of dynamic outdoor environments recorded by mobile platforms.
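
The abstract describes the scoring idea only at a high level. Below is a minimal, hypothetical sketch (not the authors' implementation) of how a one-class SVM score and a replicator-style network's reconstruction error could be blended into a single similarity error for ranking candidate frames. The use of scikit-learn's MLPRegressor as a stand-in for the replicator network, the blending weight w, and all parameter values are assumptions; feature extraction is abstracted as precomputed vectors.

```python
# Sketch: combine a one-class SVM with a replicator (autoencoder-style)
# network to rank candidate frames by similarity error. All parameters
# and the feature representation are assumptions, not values from the paper.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM
from sklearn.neural_network import MLPRegressor  # stands in for the replicator network


def train_one_class_models(ref_features):
    """Fit both one-class models on features of the reference frame(s)."""
    scaler = StandardScaler().fit(ref_features)
    X = scaler.transform(ref_features)
    ocsvm = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(X)
    # Replicator network: trained to reproduce its own input.
    replicator = MLPRegressor(hidden_layer_sizes=(8,), activation="tanh",
                              max_iter=2000, random_state=0).fit(X, X)
    return scaler, ocsvm, replicator


def similarity_error(scaler, ocsvm, replicator, frame_features, w=0.5):
    """Lower error = more similar to the reference frames.
    The blending weight w is an assumption, not taken from the paper."""
    X = scaler.transform(frame_features)
    svm_err = -ocsvm.decision_function(X)                 # larger = less similar
    rec_err = np.linalg.norm(X - replicator.predict(X), axis=1)
    return w * svm_err + (1.0 - w) * rec_err


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.normal(size=(40, 16))           # features of reference frames (video A)
    candidates = rng.normal(size=(200, 16))   # features of frames from video B
    candidates[57] = ref.mean(axis=0)         # plant a near-match for illustration

    scaler, ocsvm, replicator = train_one_class_models(ref)
    errors = similarity_error(scaler, ocsvm, replicator, candidates)
    top = np.argsort(errors)[:5]              # small set of potential correspondences
    best = top[np.argmin(errors[top])]        # exact match = minimum similarity error
    print("candidate set:", top, "best match:", best)
```

In this sketch the candidate set plays the role of the "small set of potentially corresponding frames" mentioned in the abstract, and the final match is the frame with the lowest combined similarity error within that set.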