Dynamic texture synthesis using a spatial temporal descriptor

  • Authors:
  • Yimo Guo, Guoying Zhao, Jie Chen, Matti Pietikäinen, Zhengguang Xu

  • Affiliations:
  • Machine Vision Group, Department of Electrical and Information Engineering, University of Oulu, Finland (Y. Guo, G. Zhao, J. Chen, M. Pietikäinen); School of Information Engineering, University of Science and Technology, Beijing, China (Y. Guo, Z. Xu)

  • Venue:
  • ICIP '09: Proceedings of the 16th IEEE International Conference on Image Processing
  • Year:
  • 2009


Abstract

Dynamic textures are image sequences that exhibit visual pattern repetition in time and space, such as smoke, flames, and moving objects. Dynamic texture synthesis aims to provide a continuous, infinitely varying stream of images by operating on such sequences. The earlier video texture method produces high-quality visual results, but its frame representation does not fully exploit the temporal correlation among frames. We therefore develop a novel spatial-temporal descriptor for frame description, together with a similarity measure, built on the basis of the video texture method. Compared with the earlier approach, our method considers both the spatial and temporal domains of a video sequence in its representation and, moreover, combines local and global description on each spatial-temporal plane. Experimental results show that the proposed method achieves better performance in synthesizing both natural scenes and human motion. In particular, it is robust to noise when remodeling videos into an infinite time domain.