Dynamic texture synthesis in space with a spatio-temporal descriptor
ACCV'12 Proceedings of the 11th international conference on Computer Vision - Volume Part I
Dynamic textures are image sequences that exhibit visual pattern repetition in time and space, such as smoke, flames, and moving objects. Dynamic texture synthesis aims to produce a continuous, infinitely varying stream of images by operating on such sequences. The earlier video texture method provides high-quality visual results, but its frame representation does not fully exploit the temporal correlation among frames. Building on the video texture method, we therefore develop a novel spatio-temporal descriptor for frame description, together with a corresponding similarity measure. Compared with the previous approach, our method considers both the spatial and temporal domains of a video sequence in its representation, and it combines local and global descriptions on each spatio-temporal plane. Experimental results show that the proposed method achieves better performance in synthesizing both natural scenes and human motion; in particular, it is robust to noise when remodeling videos into an infinite time domain.