Video Epitomes

  • Authors:
  • Vincent Cheung; Brendan J. Frey; Nebojsa Jojic

  • Affiliations:
  • Electrical and Computer Engineering, University of Toronto, Toronto, Canada; Machine Learning and Applied Statistics, Microsoft Research, Redmond, WA 98052, USA

  • Venue:
  • International Journal of Computer Vision
  • Year:
  • 2008

Abstract

Recently, "epitomes" were introduced as patch-based probability models that are learned by compiling together a large number of examples of patches from input images. In this paper, we describe how epitomes can be used to model video data and we describe significant computational speedups that can be incorporated into the epitome inference and learning algorithm. In the case of videos, epitomes are estimated so as to model most of the small space-time cubes from the input data. Then, the epitome can be used for various modeling and reconstruction tasks, of which we show results for video super-resolution, video interpolation, and object removal. Besides computational efficiency, an interesting advantage of the epitome as a representation is that it can be reliably estimated even from videos with large amounts of missing data. We illustrate this ability on the task of reconstructing the dropped frames in video broadcast using only the degraded video and also in denoising a severely corrupted video.