Modeling urban scenes in the spatial-temporal space

  • Authors:
  • Jiong Xu; Qing Wang; Jie Yang

  • Affiliations:
  • School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an, P.R. China (Jiong Xu, Qing Wang); School of Computer Science, Carnegie Mellon University, Pittsburgh, PA (Jie Yang)

  • Venue:
  • ACCV'10: Proceedings of the 10th Asian Conference on Computer Vision - Volume Part II
  • Year:
  • 2010

Abstract

This paper presents a technique to simultaneously model 3D urban scenes in the spatial-temporal space using a collection of photos that span many years. We propose a middle-level representation, the building, to characterize significant structure changes in the scene. We first use structure-from-motion techniques to build 3D point clouds, which are a mixture of scenes from different periods of time. We then segment the point clouds into independent buildings using a hierarchical method: coarse clustering on sparse points followed by fine classification on dense points, based on the spatial distance between point clouds and the difference between their visibility vectors. In the fine classification, we simultaneously segment building candidates in the spatial-temporal space using a probabilistic model. We employ a z-buffering based method to infer the existence of each building in each image. After recovering the temporal order of the input images, we finally obtain 3D models of these buildings along the time axis. We present experiments using both toy building images captured in our lab and real urban scene images to demonstrate the feasibility of the proposed approach.
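The abstract's segmentation step groups 3D points into buildings using two cues: spatial proximity and agreement between binary visibility vectors (which images each point appears in). The following is a minimal sketch of that idea, not the paper's actual algorithm: it uses simple union-find connectivity with illustrative thresholds (`d_max`, `v_max` are assumptions, not values from the paper).

```python
import numpy as np

def segment_points(points, visibility, d_max, v_max):
    """Group 3D points into building clusters: two points join the same
    cluster when they are spatially close AND were seen in a similar set
    of images (small Hamming distance between binary visibility vectors).
    Thresholds d_max and v_max are illustrative, not from the paper."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        # Union-find root lookup with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            close = np.linalg.norm(points[i] - points[j]) <= d_max
            similar = np.sum(visibility[i] != visibility[j]) <= v_max
            if close and similar:
                parent[find(i)] = find(j)

    # Relabel cluster roots as consecutive integers.
    roots = [find(i) for i in range(n)]
    labels = {r: k for k, r in enumerate(dict.fromkeys(roots))}
    return [labels[r] for r in roots]

# Synthetic example: two spatially separated point groups; each visibility
# vector marks which of 4 images the point appears in (1 = visible).
points = np.array([[0.0, 0, 0], [0.1, 0, 0], [5.0, 0, 0], [5.1, 0, 0]])
vis = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]])
print(segment_points(points, vis, d_max=1.0, v_max=1))  # → [0, 0, 1, 1]
```

Points that are close in space but visible in disjoint sets of images stay in separate clusters, which is how the visibility cue lets the method separate a demolished building from the one later erected on the same site.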