Hierarchical space tiling for scene modeling

  • Authors:
  • Shuo Wang; Yizhou Wang; Song-Chun Zhu

  • Affiliations:
  • Nat'l Engineering Lab for Video Technology, Key Lab. of Machine Perception, Department of EECS, Peking University, Beijing, China, and Center for Vision, Cognition, Learning and Arts (VCLA), Department ...; Nat'l Engineering Lab for Video Technology, Key Lab. of Machine Perception, Department of EECS, Peking University, Beijing, China; Center for Vision, Cognition, Learning and Arts (VCLA), Department of Statistics, University of California, Los Angeles

  • Venue:
  • ACCV'12: Proceedings of the 11th Asian Conference on Computer Vision - Volume Part II
  • Year:
  • 2012

Abstract

A typical scene category, e.g., street or beach, contains an enormous number (on the order of 10^4 to 10^5) of distinct scene configurations composed of objects and regions of varying shapes in different layouts. A well-known family of representations that can effectively address such complexity is the compositional models; however, learning the structures of hierarchical compositional models remains a challenging task in vision. The objective of this paper is to present an efficient method for learning such models from a set of scene configurations. We start with an over-complete representation called Hierarchical Space Tiling (HST), which quantizes the huge and continuous scene configuration space in an And-Or tree (AOT). This hierarchical AOT can generate a combinatorial number of configurations (on the order of 10^31) from a small dictionary of elements. We then estimate the HST/AOT model through a learning-by-parsing strategy, which iteratively updates the HST/AOT parameters while constructing the optimal parse tree for each training configuration. Finally, we prune branches with zero or low probability to obtain a much smaller HST/AOT. The HST quantization allows us to convert the challenging structure-learning problem into a tractable parameter-learning problem. We evaluate the representation in three aspects. (i) Coding efficiency: the learned representation approximates valid configurations with fewer errors using a smaller number of primitives than other popular representations. (ii) Semantic power of learning: the learned representation is less ambiguous in parsing configurations and has semantically meaningful inner concepts; it captures both the diversity and the frequency (prior) of the scene configurations. (iii) Scene classification: the model is not only fully generative but also yields discriminative scene classification performance that outperforms state-of-the-art methods.
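The central mechanism in the abstract, an And-Or tree whose Or-nodes choose among alternative tilings and whose And-nodes compose sub-regions, can be illustrated with a short sketch. The snippet below is a toy illustration, not the authors' code: the node types and the counting function are our own assumptions, chosen only to show how multiplying choices at And-nodes and summing them at Or-nodes lets a small terminal dictionary generate combinatorially many configurations.

```python
# Minimal sketch of an And-Or tree (AOT), assuming three node kinds:
# - "and": a region decomposes into all of its sub-regions,
# - "or":  a region is realized by exactly one alternative tiling,
# - "terminal": a dictionary element (leaf).
from dataclasses import dataclass, field
from typing import List


@dataclass
class Node:
    kind: str                                   # "and", "or", or "terminal"
    children: List["Node"] = field(default_factory=list)


def num_configurations(node: Node) -> int:
    """Count distinct configurations (parse trees) rooted at this node."""
    if node.kind == "terminal":
        return 1
    counts = [num_configurations(c) for c in node.children]
    if node.kind == "and":                      # all children used: product
        total = 1
        for c in counts:
            total *= c
        return total
    return sum(counts)                          # "or": choose one child


# Toy example: a region is either an atomic element, or is split into two
# halves, each of which may be any of three terminal elements.
leaves = [Node("terminal") for _ in range(3)]
half = Node("or", list(leaves))
split = Node("and", [half, half])
region = Node("or", [Node("terminal"), split])
print(num_configurations(region))               # 1 + 3*3 = 10 configurations
```

Nesting such Or/And alternations a few levels deep is what drives the configuration count from a handful of dictionary elements toward the combinatorial scale (10^31) quoted in the abstract; the learning-by-parsing step then assigns probabilities to the Or-branches so that rarely used ones can be pruned.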