In this letter, we propose a Bayesian approach to video object segmentation. Our method consists of two stages. In the first stage, we partition the video data into a set of three-dimensional (3-D) watershed volumes, where each watershed volume is a series of corresponding two-dimensional (2-D) image regions. These 2-D image regions are obtained by applying marker-controlled watershed segmentation to each image frame, where the markers are extracted by first generating a set of initial markers via temporal tracking and then refining them with two shrinking schemes: iterative adaptive erosion and verification against a presimplified watershed segmentation. In the second stage, we use a Markov random field to model the spatio-temporal relationships among the 3-D watershed volumes obtained from the first stage. The desired video objects can then be extracted by merging watershed volumes with similar motion characteristics within a Bayesian framework. A major advantage of this method is that it takes into account the global motion information contained in each watershed volume. Our experiments show that the proposed method is promising for extracting moving objects from a video sequence.
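The first stage relies on marker-controlled watershed segmentation of each frame. A minimal sketch of that idea, using a priority-flood formulation on a toy intensity grid, is shown below; the function name, the 2-D list representation, and the flooding details are illustrative assumptions, not the authors' implementation (which also involves temporal marker tracking and the two shrinking schemes).

```python
import heapq

def marker_watershed(image, markers):
    """Priority-flood watershed (illustrative sketch).

    image:   2-D list of intensities, treated as a gradient-like relief.
    markers: 2-D list of labels; 0 means unlabeled, >0 is a seed region.

    Labeled seeds grow outward in order of increasing intensity, so
    region boundaries tend to settle on high-intensity ridges.
    """
    h, w = len(image), len(image[0])
    labels = [[markers[y][x] for x in range(w)] for y in range(h)]
    heap, order = [], 0  # 'order' breaks ties deterministically
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                heapq.heappush(heap, (image[y][x], order, y, x))
                order += 1
    while heap:
        _, _, y, x = heapq.heappop(heap)
        lab = labels[y][x]
        # Claim unlabeled 4-neighbors for this marker's region.
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and labels[ny][nx] == 0:
                labels[ny][nx] = lab
                heapq.heappush(heap, (image[ny][nx], order, ny, nx))
                order += 1
    return labels

# Two low-intensity basins separated by a ridge (column of 5s);
# one marker seeded in each basin.
img = [[1, 1, 5, 1, 1],
       [1, 1, 5, 1, 1],
       [1, 1, 5, 1, 1]]
mks = [[1, 0, 0, 0, 2],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0]]
seg = marker_watershed(img, mks)
```

In the full method, these per-frame regions are then linked across time into the 3-D watershed volumes that the second-stage Markov random field merges.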