Novel approaches for synthesizing video textures

  • Authors:
  • Wentao Fan; Nizar Bouguila

  • Affiliations:
  • Concordia Institute for Information Systems Engineering, Faculty of Engineering and Computer Science, Concordia University, Montreal, QC, Canada H3G 2W1 (both authors)

  • Venue:
  • Expert Systems with Applications: An International Journal
  • Year:
  • 2012

Abstract

Video texture, a novel type of medium, produces a new video with a continuously varying stream of images from a recorded video. The classic approach to generating video textures applies principal components analysis (PCA) for dimensionality reduction (i.e., extraction of frame signatures) and an autoregressive (AR) process for prediction. In this paper we investigate the use of other dimensionality reduction techniques to generate accurate video textures; our experiments show that the quality of the resulting video textures can be further improved. We also propose a new approach that combines probabilistic principal components analysis (PPCA) with the Gaussian process dynamical model (GPDM) to synthesize video textures containing frames that never appeared in the original video while preserving its motion characteristics. Furthermore, we propose two ways of generating online video textures by applying incremental Isomap and incremental spatio-temporal Isomap (IST-Isomap). Both approaches produce good online video texture results; in particular, the proposed IST-Isomap is better suited to sparse video data (e.g., cartoons).
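
The classic pipeline mentioned in the abstract (PCA for frame signatures, an AR process for prediction) can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the synthetic frame data, number of components, AR order, and noise scale are all illustrative choices.

```python
# Minimal sketch of a PCA + autoregressive (AR) video-texture pipeline.
# Not the paper's code; frame data and hyperparameters are illustrative.
import numpy as np
from sklearn.decomposition import PCA


def fit_pca_ar(frames, n_components=20, ar_order=2):
    """frames: array of shape (T, H*W) holding flattened grayscale frames."""
    # 1. Dimensionality reduction: project frames to low-dimensional signatures.
    pca = PCA(n_components=n_components)
    z = pca.fit_transform(frames)                          # (T, n_components)

    # 2. Fit a linear AR model z_t ~ [z_{t-1}, ..., z_{t-p}] by least squares.
    T = z.shape[0]
    X = np.hstack([z[ar_order - k - 1:T - k - 1] for k in range(ar_order)])
    Y = z[ar_order:]
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)              # stacked AR coefficients
    return pca, A


def synthesize(pca, A, seed, n_frames, ar_order=2, noise_scale=0.01):
    """Roll the AR model forward from `seed` signatures and decode via PCA."""
    z = list(seed)                                         # last `ar_order` signatures
    for _ in range(n_frames):
        hist = np.hstack([z[-k - 1] for k in range(ar_order)])
        z_next = hist @ A + noise_scale * np.random.randn(A.shape[1])
        z.append(z_next)
    return pca.inverse_transform(np.array(z[ar_order:]))   # back to pixel space


if __name__ == "__main__":
    # Synthetic stand-in for real video frames: 100 fake 64x64 images.
    rng = np.random.default_rng(0)
    frames = rng.random((100, 64 * 64))
    pca, A = fit_pca_ar(frames, n_components=10, ar_order=2)
    seed = pca.transform(frames[-2:])
    new_frames = synthesize(pca, A, seed, n_frames=30)
    print(new_frames.shape)                                # (30, 4096)
```

In this sketch the AR coefficients are fit by ordinary least squares on the PCA signatures; the variants discussed in the paper would swap in a different dimensionality-reduction step (PPCA, Isomap, IST-Isomap) or a different dynamical model (GPDM) for prediction.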