Dynamic texture synthesis in space with a spatio-temporal descriptor

  • Authors:
  • Rocio A. Lizarraga-Morales, Yimo Guo, Guoying Zhao, Matti Pietikäinen

  • Affiliations:
  • DICIS, Universidad de Guanajuato, Salamanca, Guanajuato, Mexico (Lizarraga-Morales); Center for Machine Vision Research, Department of Electrical and Information Engineering, University of Oulu, Finland (Guo, Zhao, Pietikäinen)

  • Venue:
  • ACCV'12: Proceedings of the 11th Asian Conference on Computer Vision - Volume Part I
  • Year:
  • 2012

Abstract

Dynamic textures are image sequences that record texture in motion. Given a sample video, the goal of synthesis is to create a new sequence, enlarged in the spatial and/or temporal domain, that looks perceptually similar to the input. Most synthesis methods focus on extending sequences only in the temporal domain. In this paper, we propose a dynamic texture synthesis approach for the spatial domain, where we aim to enlarge the frame size while preserving the appearance and motion of the original video. For this purpose, we use a patch-based synthesis method built on LBP-TOP features. In our approach, 3D patch regions from the input are selected and copied to an output sequence. In other patch-based approaches, patch selection is usually based only on color, which cannot capture spatial and temporal information and causes an unnatural look in the output. In contrast, we propose to use the LBP-TOP operator, which implicitly represents information about appearance, dynamics, and the correlation between frames. Experiments show that the use of LBP-TOP improves on other methods, giving a good description of the structure and motion of dynamic textures without generating visible discontinuities or artifacts.
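The LBP-TOP descriptor mentioned in the abstract can be illustrated with a minimal sketch: local binary pattern codes are computed on the three orthogonal planes of a spatio-temporal volume (XY for appearance, XT and YT for motion), and their histograms are concatenated into one feature vector. The code below is an assumed simplification, not the authors' implementation: it uses a single central plane per axis and a basic 8-neighbor LBP, whereas the paper's operator accumulates codes over the whole volume.

```python
import numpy as np

def lbp_8neighbors(plane):
    """Basic 8-neighbor LBP codes for a 2D plane (border pixels skipped)."""
    c = plane[1:-1, 1:-1]  # center pixels
    # Neighbor offsets, ordered clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=np.int64)
    h, w = plane.shape
    for bit, (dy, dx) in enumerate(offsets):
        n = plane[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        # Each neighbor >= center contributes one distinct bit of the code.
        codes += (n >= c).astype(np.int64) << bit
    return codes

def lbp_top_histogram(volume):
    """Concatenated 256-bin LBP histograms from the three orthogonal
    planes (XY, XT, YT) through the center of a (T, H, W) volume."""
    t, h, w = volume.shape
    planes = [volume[t // 2, :, :],   # XY plane: appearance
              volume[:, h // 2, :],   # XT plane: horizontal motion
              volume[:, :, w // 2]]   # YT plane: vertical motion
    hists = [np.bincount(lbp_8neighbors(p).ravel(), minlength=256)
             for p in planes]
    return np.concatenate(hists)  # 3 x 256 = 768-dimensional descriptor
```

In a patch-based synthesis loop, candidate 3D patches would then be ranked by a histogram distance (e.g., chi-square) between their LBP-TOP descriptors rather than by color difference alone, which is what lets the selection respect both texture structure and motion.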