We introduce a new technique for estimating the optical flow field from image sequences. As suggested by Fleet and Jepson (1990), we track contours of constant phase over time, since these are more robust to variations in lighting conditions and to deviations from pure translation than contours of constant amplitude. Our phase-based approach proceeds in three stages. First, the image sequence is spatially filtered using a bank of quadrature pairs of Gabor filters, and the temporal phase gradient is computed, yielding estimates of the velocity component in the direction orthogonal to each filter pair's orientation. Second, a component velocity is rejected if the corresponding filter pair's phase is not linear over a given time span. Third, the remaining component velocities at each spatial location are combined, and a recurrent neural network is used to derive the full velocity. We test our approach on several image sequences, both synthetic and real.
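The core of the first stage can be illustrated in one dimension. A quadrature pair of Gabor filters yields a complex-valued response whose local phase, for a pattern translating at v pixels per frame, evolves as dphi/dt = -2*pi*f0*v (with f0 the filter's tuned frequency), so the component velocity follows from the temporal phase gradient. The sketch below is an illustrative reconstruction, not the authors' implementation; the filter parameters and the sinusoidal test signal are assumptions chosen for clarity.

```python
import numpy as np

def gabor_quadrature(f0, sigma, radius):
    """Quadrature pair of 1-D Gabor kernels tuned to spatial frequency f0."""
    u = np.arange(-radius, radius + 1)
    env = np.exp(-u**2 / (2.0 * sigma**2))  # Gaussian envelope
    return env * np.cos(2 * np.pi * f0 * u), env * np.sin(2 * np.pi * f0 * u)

def phase_velocity(frames, f0, sigma, x0):
    """Component velocity at pixel x0 from the temporal phase gradient.

    For a pattern translating at v px/frame, the phase of the Gabor
    response evolves as dphi/dt = -2*pi*f0*v, hence
    v = -(dphi/dt) / (2*pi*f0).
    """
    even, odd = gabor_quadrature(f0, sigma, radius=4 * int(sigma))
    phases = []
    for frame in frames:
        # Complex filter response; its angle is the local phase.
        resp = np.convolve(frame, even + 1j * odd, mode="same")
        phases.append(np.angle(resp[x0]))
    phi = np.unwrap(np.array(phases))
    # Least-squares slope of phase over time = temporal phase gradient.
    slope = np.polyfit(np.arange(len(phi)), phi, 1)[0]
    return -slope / (2 * np.pi * f0)

# Synthetic sequence: a sinusoidal grating translating at 1.5 px/frame.
f0, sigma, v_true = 0.1, 8.0, 1.5
x = np.arange(128)
frames = [np.cos(2 * np.pi * f0 * (x - v_true * t)) for t in range(6)]
v_est = phase_velocity(frames, f0, sigma, x0=64)  # close to 1.5
```

In the full method each filter orientation gives only the velocity component normal to that orientation; a stage-two linearity check on the phase (goodness of the straight-line fit over the time span) would gate which components survive to the combination stage.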