Scene Segmentation from Visual Motion Using Global Optimization
IEEE Transactions on Pattern Analysis and Machine Intelligence
Traditional motion segmentation techniques generally depend on a pre-estimated optical flow field. Unfortunately, most popular motion estimation methods lack precision near object edges, which makes them ill-suited to recovering the exact shape of moving objects. In this contribution, we present an original motion segmentation technique based on a K-nearest-neighbor fusion of spatial and temporal label cues. Our fusion model takes as input a spatial segmentation of a still image and an estimate of the motion label field, and minimizes an energy function built from spatial and temporal label cues extracted from the two input fields. The proposed algorithm is intuitive, simple to implement, and general enough to be applied to other segmentation problems. Furthermore, the method does not depend on estimating any threshold or any weighting function between the spatial and temporal energy terms, as is sometimes required by energy-based segmentation models. Experiments on synthetic and real image sequences indicate that the proposed method is robust and accurate.
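The fusion idea can be illustrated with a minimal sketch. This is not the authors' implementation: the helper name `fuse_labels`, the window radius, and the iteration count are all assumptions, and the label-cue energy minimization is simplified into a local majority vote in which each pixel's motion label is re-estimated from nearby pixels belonging to the same spatial region, so that region boundaries from the still image sharpen the motion edges.

```python
import numpy as np

def fuse_labels(spatial_seg, motion_labels, radius=2, n_iter=5):
    """Sketch of spatial/temporal label fusion (simplified stand-in).

    spatial_seg   : 2-D int array, region labels from a still-image segmentation
    motion_labels : 2-D int array, noisy estimated motion label field
    Returns a motion label field cleaned up by region-constrained voting.
    """
    fused = motion_labels.copy()
    h, w = fused.shape
    for _ in range(n_iter):
        out = fused.copy()
        for y in range(h):
            for x in range(w):
                # Local window around the pixel (clipped at image borders).
                y0, y1 = max(0, y - radius), min(h, y + radius + 1)
                x0, x1 = max(0, x - radius), min(w, x + radius + 1)
                # Only neighbors in the same spatial region may vote,
                # so motion edges snap to still-image region boundaries.
                same_region = spatial_seg[y0:y1, x0:x1] == spatial_seg[y, x]
                labs, cnts = np.unique(fused[y0:y1, x0:x1][same_region],
                                       return_counts=True)
                out[y, x] = labs[np.argmax(cnts)]
        fused = out
    return fused
```

In this toy form, isolated motion-label errors inside a spatial region are voted away within a few iterations; the paper's actual model replaces the vote with the minimization of an explicit energy over spatial and temporal label cues.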