Learning reconfigurable scene representation by tangram model

  • Authors:
  • Jun Zhu; Tianfu Wu; Song-Chun Zhu; Xiaokang Yang; Wenjun Zhang

  • Affiliations:
  • Institute of Image Communication and Information Processing, Shanghai Jiao Tong University, China; Lotus Hill Institute for Computer Vision and Information Science, China; Lotus Hill Institute for Computer Vision and Information Science, China; Institute of Image Communication and Information Processing, Shanghai Jiao Tong University, China; Institute of Image Communication and Information Processing, Shanghai Jiao Tong University, China

  • Venue:
  • WACV '12: Proceedings of the 2012 IEEE Workshop on the Applications of Computer Vision
  • Year:
  • 2012

Abstract

This paper proposes a method to learn a reconfigurable and sparse scene representation in the joint space of spatial configuration and appearance in a principled way. We call it the tangram model, which has three properties: (1) Unlike the fixed structure of the spatial pyramid widely used in the literature, we propose a compositional shape dictionary, organized in an And-Or directed acyclic graph (AOG), to quantize the space of spatial configurations. (2) The shape primitives (called tans) in the dictionary can be described by any "off-the-shelf" appearance features, chosen according to the task. (3) A dynamic programming (DP) algorithm is used to learn the globally optimal parse tree in the joint space of spatial configuration and appearance. We demonstrate the tangram model in both a generative learning formulation and a discriminative matching kernel. In experiments, we show that the tangram model captures meaningful spatial configurations as well as appearance for various scene categories, and achieves state-of-the-art classification performance on the LSP 15-class scene dataset and the MIT 67-class indoor scene dataset.
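To make the DP parse over an And-Or graph concrete, below is a minimal Python sketch, not the authors' implementation. It assumes a regular grid of cells, binary horizontal/vertical splits as the And-decompositions, and a toy appearance score; in the paper the tan dictionary and appearance terms are defined and learned differently. Each tan (an Or-node) chooses the better of terminating with its own appearance score or its best-scoring And-decomposition into child tans, computed bottom-up with memoization.

```python
# Minimal sketch (not the authors' implementation) of a bottom-up DP parse
# over an And-Or graph (AOG) of rectangular shape primitives ("tans").
# The grid decomposition, binary splits, and toy appearance score below are
# illustrative assumptions, not the paper's learned dictionary.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

# A tan is identified by its cell range on a grid: (row0, col0, row1, col1),
# half-open on the bottom/right.
Tan = Tuple[int, int, int, int]

@dataclass
class AOGNode:
    tan: Tan
    # Each entry is one And-decomposition: child tans that tile the parent.
    decompositions: List[List[Tan]] = field(default_factory=list)

def build_grid_aog(rows: int, cols: int) -> Dict[Tan, AOGNode]:
    """Enumerate all rectangular tans on a rows x cols grid together with
    their horizontal/vertical binary cuts (the Or-branches of each tan)."""
    nodes: Dict[Tan, AOGNode] = {}
    for r0 in range(rows):
        for c0 in range(cols):
            for r1 in range(r0 + 1, rows + 1):
                for c1 in range(c0 + 1, cols + 1):
                    node = AOGNode((r0, c0, r1, c1))
                    for r in range(r0 + 1, r1):      # horizontal cuts
                        node.decompositions.append(
                            [(r0, c0, r, c1), (r, c0, r1, c1)])
                    for c in range(c0 + 1, c1):      # vertical cuts
                        node.decompositions.append(
                            [(r0, c0, r1, c), (r0, c, r1, c1)])
                    nodes[node.tan] = node
    return nodes

def parse(nodes: Dict[Tan, AOGNode],
          appearance_score: Callable[[Tan], float],
          root: Tan) -> Dict[Tan, Tuple[float, List[Tan]]]:
    """For each tan, keep the best of (i) terminating it with its appearance
    score or (ii) its best-scoring decomposition; memoization turns this into
    a DP whose root entry is the globally optimal parse-tree score."""
    best: Dict[Tan, Tuple[float, List[Tan]]] = {}

    def solve(tan: Tan) -> float:
        if tan in best:
            return best[tan][0]
        score, choice = appearance_score(tan), []   # terminate at this tan
        for children in nodes[tan].decompositions:
            s = sum(solve(child) for child in children)
            if s > score:
                score, choice = s, children
        best[tan] = (score, choice)
        return score

    solve(root)
    return best

if __name__ == "__main__":
    nodes = build_grid_aog(2, 2)
    # Toy score (sqrt of area) favours finer tans, so the optimal parse
    # splits the image down to single cells.
    area = lambda t: (t[2] - t[0]) * (t[3] - t[1])
    best = parse(nodes, lambda t: area(t) ** 0.5, root=(0, 0, 2, 2))
    print(best[(0, 0, 2, 2)])   # (score, chosen decomposition of the full image)
```

In the paper the per-tan scores come from the learned generative formulation or the discriminative matching kernel rather than a hand-set function; the sketch only illustrates why the recursion returns a globally optimal parse tree.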