Video temporal super-resolution based on self-similarity

  • Authors:
  • Mihoko Shimano; Takahiro Okabe; Imari Sato; Yoichi Sato

  • Affiliations:
  • PRESTO, Japan Science and Technology Agency and The University of Tokyo; The University of Tokyo; National Institute of Informatics; The University of Tokyo

  • Venue:
  • ACCV'10 Proceedings of the 10th Asian conference on Computer vision - Volume Part I
  • Year:
  • 2010

Abstract

We propose a method for producing temporal super-resolution video from a single video by exploiting the self-similarity that exists in the spatio-temporal domain of videos. Temporal super-resolution is an inherently ill-posed problem because infinitely many high-temporal-resolution frame sequences can produce the same low-temporal-resolution frame. The key idea of this work for resolving the ambiguity is to exploit self-similarity: a self-similar appearance that represents the integrated motion of objects during each exposure time of videos with different temporal resolutions. In contrast with other methods that generate plausible intermediate frames by temporal interpolation, our method can increase the temporal resolution of a given video, for instance by resolving one frame into two frames. Through quantitative evaluation of experimental results, we demonstrate that our method generates enhanced videos with increased temporal resolution, thereby recovering the appearance of dynamic scenes.
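The ill-posedness mentioned in the abstract can be illustrated with a toy sketch (not the authors' algorithm): if a low-temporal-resolution frame is modeled as the integration of the high-rate frames captured during its exposure time, then distinct high-rate sequences can integrate to the same observation. The function name `integrate_exposure` and the averaging model are illustrative assumptions.

```python
import numpy as np

def integrate_exposure(high_rate_frames):
    """Model a low-temporal-resolution frame as the average (i.e., the
    normalized integration) of the high-rate frames captured during
    one exposure interval."""
    return np.mean(high_rate_frames, axis=0)

# Two *different* pairs of hypothetical high-rate frames...
pair_a = np.stack([np.full((2, 2), 0.2), np.full((2, 2), 0.8)])
pair_b = np.stack([np.full((2, 2), 0.4), np.full((2, 2), 0.6)])

# ...integrate to the identical low-rate observation, so inverting the
# integration (resolving one frame into two) has no unique solution
# without extra constraints such as the paper's self-similarity prior.
low_a = integrate_exposure(pair_a)
low_b = integrate_exposure(pair_b)
assert np.allclose(low_a, low_b)  # both average to 0.5 everywhere
```

This is why the method needs a prior: self-similar appearances across temporal scales supply the missing constraint that selects one high-rate sequence among the infinitely many consistent ones.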