Learning Spatiotemporal T-Junctions for Occlusion Detection

  • Authors:
  • Nicholas Apostoloff; Andrew Fitzgibbon

  • Affiliations:
  • University of Oxford; University of Oxford

  • Venue:
  • CVPR '05: Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) - Volume 2
  • Year:
  • 2005

Abstract

The goal of motion segmentation and layer extraction can be viewed as the detection and localization of occluding surfaces. A feature that has been shown to be a particularly strong indicator of occlusion, in both computer vision and neuroscience, is the T-junction; however, little progress has been made in T-junction detection. One reason for this is the difficulty of distinguishing false T-junctions (i.e. those not on an occluding edge) from real T-junctions in cluttered images. In addition, the photometric profile of a T-junction alone is not enough for reliable detection. This paper overcomes the first problem by searching for T-junctions not in space, but in space-time. This removes many false T-junctions and creates a simpler image structure to explore. The second problem is mitigated by learning the appearance of T-junctions in these spatio-temporal images. A relevance vector machine (RVM) classifier for T-junctions is learnt from hand-labelled data, using SIFT descriptors to capture the redundancy of their appearance. This classifier is then demonstrated in a novel occlusion detector that fuses Canny edges and T-junctions in the spatio-temporal domain to detect occluding edges in the spatial domain.
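
For concreteness, the pipeline described in the abstract (slicing the video into spatio-temporal x-t images, classifying SIFT descriptors of candidate points, and fusing the detections with Canny edges) could be prototyped roughly as sketched below. This is a minimal illustration under assumptions, not the authors' implementation: the scanline row, Canny thresholds, and helper names are arbitrary, the training data is a placeholder, and a scikit-learn SVM stands in for the relevance vector machine used in the paper.

```python
"""Hedged sketch of a spatio-temporal T-junction / occlusion pipeline.

Assumptions: OpenCV with SIFT support (opencv-python >= 4.4), scikit-learn,
and an SVM as a stand-in for the paper's RVM classifier.
"""
import cv2
import numpy as np
from sklearn.svm import SVC


def xt_slice(frames, row):
    """Stack one image row across all frames into an x-t slice of shape (T, W)."""
    return np.stack([f[row, :] for f in frames], axis=0)


def detect_t_junctions(slice_img, classifier):
    """Return (x, t) positions of SIFT keypoints classified as T-junctions."""
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(slice_img, None)
    if descriptors is None:
        return []
    labels = classifier.predict(descriptors)
    return [kp.pt for kp, lab in zip(keypoints, labels) if lab == 1]


def occluding_edge_columns(slice_img, classifier, canny_lo=50, canny_hi=150):
    """Fuse Canny edges with T-junction detections in the x-t slice.

    Returns x coordinates whose edge pixels carry at least one detected
    T-junction, i.e. candidate occluding edges on this scanline.
    """
    edges = cv2.Canny(slice_img, canny_lo, canny_hi)
    votes = set()
    for (x, t) in detect_t_junctions(slice_img, classifier):
        xi, ti = int(round(x)), int(round(t))
        if 0 <= ti < edges.shape[0] and 0 <= xi < edges.shape[1] and edges[ti, xi]:
            votes.add(xi)
    return sorted(votes)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Placeholder training set: the paper uses hand-labelled T-junction
    # exemplars; random 128-D vectors stand in for SIFT descriptors here.
    X_train = rng.random((20, 128)).astype(np.float32)
    y_train = rng.integers(0, 2, 20)
    clf = SVC(kernel="rbf").fit(X_train, y_train)

    # Placeholder video: 30 random greyscale frames.
    frames = [rng.integers(0, 256, (120, 160), dtype=np.uint8) for _ in range(30)]
    print(occluding_edge_columns(xt_slice(frames, row=60), clf))
```

With real video and a classifier trained on labelled T-junction patches, the returned columns mark where, on the chosen scanline, an image edge coincides with spatio-temporal T-junction evidence; repeating over rows recovers occluding edges in the spatial domain, in the spirit of the fusion step the abstract describes.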