Leveraging probabilistic season and location context models for scene understanding

  • Authors:
  • Jie Yu; Jiebo Luo

  • Affiliations:
  • Kodak Research Labs, Rochester, NY, USA (both authors)

  • Venue:
  • CIVR '08: Proceedings of the 2008 International Conference on Content-based Image and Video Retrieval
  • Year:
  • 2008

Abstract

Recent research has shown the power of context-aware scene understanding in bridging the semantic gap between high-level semantic concepts and low-level image features. In this paper, we present a new method that exploits nonvisual context, derived from the season and approximate location in which pictures were taken, to facilitate region (object) annotation in consumer photos. Our method does not require precise time and location from the capture device or from user input. Instead, it learns from coarse location (e.g., states in the US) and time (e.g., seasons) information, which can be obtained automatically from picture metadata or through minimal user input (e.g., grouping). In addition, the visual context within the image is obtained by analyzing the spatial relationships between different regions (objects) in the scene. Visual and nonvisual context information are fused using a probabilistic graphical model to improve the accuracy of object region recognition. Our method has been evaluated on a database of over 10,000 regions in more than 1,000 images collected from both the Web and consumers. Experimental results show that incorporating the season and location context significantly improves the performance of region recognition.
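The core idea of the fusion step can be sketched as a naive Bayesian combination: per-class visual likelihoods for a region are multiplied by a context prior conditioned on coarse season/location metadata, then renormalized. This is a minimal illustration under assumed class names and probability values, not the authors' actual probabilistic graphical model, which also incorporates spatial relationships between regions.

```python
# Minimal sketch: fuse a region's visual likelihoods with a nonvisual
# context prior (naive Bayes product). All labels and probabilities
# below are hypothetical illustration values.

def fuse(visual_likelihood, context_prior):
    """Multiply visual evidence by the context prior for each label
    and renormalize so the scores sum to 1."""
    scores = {label: p * context_prior.get(label, 1e-6)
              for label, p in visual_likelihood.items()}
    total = sum(scores.values())
    return {label: s / total for label, s in scores.items()}

# Visual classifier output for one ambiguous white region
# (snow and sand can look similar on visual features alone).
visual = {"snow": 0.30, "sand": 0.40, "sky": 0.30}

# Context prior learned from coarse capture metadata, e.g.
# season = "winter", state = "NY" makes snow far more likely than sand.
winter_ny_prior = {"snow": 0.60, "sand": 0.05, "sky": 0.35}

posterior = fuse(visual, winter_ny_prior)
best = max(posterior, key=posterior.get)  # context flips the decision to "snow"
```

Even this crude product captures the paper's intuition: metadata-derived context can override a misleading visual score, turning a "sand" prediction into "snow" for a winter photo taken in New York.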