Cinemagraphs are a popular new type of visual media that lie in between photos and video: some parts of the frame are animated and loop seamlessly, while other parts remain completely still. Cinemagraphs are especially effective for portraits because they capture the nuances of our dynamic facial expressions. We present a fully automatic algorithm for generating portrait cinemagraphs from a short video captured with a hand-held camera. Our algorithm uses a combination of face tracking and point tracking to segment face motions into two classes: gross, large-scale motions that should be removed from the video, and dynamic facial expressions that should be preserved. This segmentation informs a spatially-varying warp that removes the large-scale motion, and a graph-cut segmentation of the frame into dynamic and still regions that preserves the finer-scale facial expression motions. We demonstrate the success of our method with a variety of results and a comparison to previous work.
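The core motion-segmentation idea in the abstract — fit a global transform to the tracked points to model the gross head/camera motion, then treat points with large residual motion as dynamic facial expression — can be sketched in a few lines. This is a minimal illustration only, not the paper's implementation: the function names, the choice of a similarity transform (the paper uses a spatially-varying warp), and the residual threshold are all assumptions for the sketch.

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares similarity transform (rotation + uniform scale +
    translation) mapping src points to dst points; a stand-in model
    for the gross, large-scale motion to be removed."""
    n = len(src)
    A = np.zeros((2 * n, 4))
    # Parameterize as x' = a*x - b*y + tx,  y' = b*x + a*y + ty
    A[0::2, 0] = src[:, 0]; A[0::2, 1] = -src[:, 1]; A[0::2, 2] = 1.0
    A[1::2, 0] = src[:, 1]; A[1::2, 1] =  src[:, 0]; A[1::2, 3] = 1.0
    params, *_ = np.linalg.lstsq(A, dst.reshape(-1), rcond=None)
    return params  # (a, b, tx, ty)

def apply_similarity(params, pts):
    a, b, tx, ty = params
    x, y = pts[:, 0], pts[:, 1]
    return np.stack([a * x - b * y + tx, b * x + a * y + ty], axis=1)

def classify_tracks(prev_pts, cur_pts, thresh=1.5):
    """Split tracked points into gross motion (explained by the global
    fit, to be stabilized away) vs. dynamic expression motion (large
    residual, to be preserved). Returns True for dynamic points."""
    params = fit_similarity(prev_pts, cur_pts)
    residual = np.linalg.norm(cur_pts - apply_similarity(params, prev_pts),
                              axis=1)
    return residual > thresh
```

In a full pipeline, the dynamic mask produced per track would then seed the graph-cut labeling of dynamic versus still regions, while the global fit (or, in the paper, a spatially-varying warp) stabilizes the rest of the frame.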