The goal of motion segmentation and layer extraction can be viewed as the detection and localization of occluding surfaces. A feature shown to be a particularly strong indicator of occlusion, in both computer vision and neuroscience, is the T-junction; yet little progress has been made in T-junction detection. One reason is the difficulty of distinguishing between false T-junctions (i.e. those not lying on an occluding edge) and real T-junctions in cluttered images; another is that their photometric profile alone is insufficient for reliable detection. This paper overcomes the first problem by searching for T-junctions not in space but in space-time, which removes many false T-junctions and yields a simpler image structure to explore. The second problem is mitigated by learning the appearance of T-junctions in these spatio-temporal images: an RVM T-junction classifier is learnt from hand-labelled data, using SIFT to capture the redundancy of their appearance. This detector is then demonstrated in a novel occlusion detector that fuses Canny edges and T-junctions in the spatio-temporal domain to detect occluding edges in the spatial domain.
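To illustrate the core idea of searching for T-junctions in space-time rather than space, the following is a minimal sketch (not the authors' implementation): a synthetic video in which a moving occluder covers a static background stripe. In the x-t slice, the static stripe traces a vertical line that terminates against the occluder's slanted trajectory, and that termination point is a spatio-temporal T-junction. All names (`make_video`, `xt_slice`, `find_t_junction`) and the toy detection rule are assumptions for this demonstration only.

```python
import numpy as np

def make_video(T=40, H=20, W=60):
    """Synthetic video: a static background stripe at x=30 is
    progressively covered by a bright occluder moving right.
    (Toy data, not from the paper.)"""
    vid = np.zeros((T, H, W), dtype=float)
    vid[:, :, 30] = 0.5                 # static background stripe
    for t in range(T):
        vid[t, :, :t] = 1.0             # occluder covers columns [0, t)
    return vid

def xt_slice(video, y):
    """Extract the spatio-temporal (x-t) slice at scanline y."""
    return video[:, y, :]               # shape (T, W)

def find_t_junction(sl, stripe_val=0.5):
    """In the x-t slice the static stripe is a vertical line; it ends
    where the occluder's slanted edge crosses it. That termination
    point is the spatio-temporal T-junction (toy detection rule)."""
    times, xs = np.where(np.isclose(sl, stripe_val))
    if len(times) == 0:
        return None
    i = times.argmax()                  # last frame the stripe is visible
    return int(times[i]), int(xs[i])

video = make_video()
sl = xt_slice(video, y=10)
print(find_t_junction(sl))              # stripe at x=30 vanishes when the occluder reaches it
```

In a real sequence the slice would be noisy and cluttered, which is why the paper replaces this hand-crafted termination rule with an RVM classifier over SIFT descriptors of candidate junctions.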