Towards unsupervised semantic segmentation of street scenes from motion cues

  • Authors:
  • Hajar Sadeghi Sokeh; Stephen Gould

  • Affiliations:
  • The Australian National University, Canberra, ACT

  • Venue:
  • Proceedings of the 27th Conference on Image and Vision Computing New Zealand
  • Year:
  • 2012


Abstract

Motion provides a rich source of information about the world. It can be used as an important cue to analyse the behaviour of objects in a scene and consequently to identify interesting locations within it. In this paper, given an unannotated video sequence of a dynamic scene captured from a fixed viewpoint, we first present a set of useful motion features that can be efficiently extracted at each pixel from optical flow. Using these features, we then develop an algorithm that extracts motion topic models and identifies semantically significant regions and landmarks in a complex scene from a short video sequence. For example, by watching a street scene, our algorithm can extract meaningful regions such as roads and important landmarks such as parking spots. Our method is robust to complicating factors such as shadows and occlusions.
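The per-pixel motion features described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual feature set: it assumes a dense optical-flow field has already been computed by some off-the-shelf method (e.g. OpenCV's `cv2.calcOpticalFlowFarneback`), and derives simple per-pixel descriptors (displacement, magnitude, direction) from it. The function name `motion_features` and the exact feature channels are hypothetical.

```python
import numpy as np

def motion_features(flow):
    """Derive simple per-pixel motion features from a dense flow field.

    flow: (H, W, 2) array of per-pixel (dx, dy) displacements, as produced
          by a dense optical-flow algorithm (assumed precomputed).
    Returns an (H, W, 4) feature map: dx, dy, magnitude, direction.
    """
    fx, fy = flow[..., 0], flow[..., 1]
    mag = np.hypot(fx, fy)          # speed of motion at each pixel
    ang = np.arctan2(fy, fx)        # direction of motion, in radians
    return np.stack([fx, fy, mag, ang], axis=-1)

# Toy example: a 4x4 flow field with uniform rightward motion of 2 px/frame.
flow = np.zeros((4, 4, 2))
flow[..., 0] = 2.0
feats = motion_features(flow)
```

Feature maps like this, accumulated over many frames, are the kind of per-pixel observations a topic model could then group into semantically coherent regions (e.g. lanes of traffic moving in a consistent direction).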