Evaluation of Localized Semantics: Data, Methodology, and Experiments

  • Authors:
  • Kobus Barnard, Quanfu Fan, Ranjini Swaminathan, Anthony Hoogs, Roderic Collins, Pascale Rondot, John Kaufhold

  • Affiliations:
  • Computer Science Department, The University of Arizona, Tucson, USA 85721-0077 (Barnard, Fan, Swaminathan); GE Global Research, Schenectady, USA 12309 (Hoogs, Collins); Aeronautics, Lockheed Martin Corp., Ft. Worth, USA 76108 (Rondot); Advanced Concepts Business Unit, SAIC Corp., McLean, USA 22102 (Kaufhold)

  • Venue:
  • International Journal of Computer Vision
  • Year:
  • 2008

Abstract

We present a new data set of 1014 images with manual segmentations and semantic labels for each segment, together with a methodology for using this kind of data for recognition evaluation. The images and segmentations are from the UCB segmentation benchmark database (Martin et al., in International Conference on Computer Vision, vol. II, pp. 416–421, 2001). The database is extended by manually labeling each segment with its most specific semantic concept in WordNet (Miller et al., Int. J. Lexicogr. 3(4):235–244, 1990). The evaluation methodology establishes protocols for mapping algorithm-specific localization (e.g., segmentations) to our data, handling synonyms, scoring matches at different levels of specificity, dealing with vocabularies with sense ambiguity (the usual case), and handling ground-truth regions with multiple labels. Given these protocols, we develop two evaluation approaches. The first measures the range of semantics that an algorithm can recognize, and the second measures the frequency with which an algorithm recognizes semantics correctly. The data, the image labeling tool, and programs implementing our evaluation strategy are all available online (kobus.ca//research/data/IJCV_2007). We apply this infrastructure to evaluate four algorithms that learn to label image regions from weakly labeled data. The algorithms tested include two variants of multiple instance learning (MIL) and two generative multi-modal mixture models. These experiments are on a significantly larger scale than previously reported, especially in the case of MIL methods; specifically, we used training data sets of up to 37,000 images and training vocabularies of up to 650 words. We found that one of the mixture models performed best on image annotation and the frequency correct measure, and that variants of MIL gave the best semantic range performance. We were able to substantially improve the performance of MIL methods on the other tasks (image annotation and frequency correct region labeling) by providing an appropriate prior.
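As a concrete illustration of the scoring protocols described above, the following minimal Python sketch (not the authors' released code; the helper name, the partial-credit weighting, and the use of NLTK's WordNet interface are all assumptions for illustration) shows one way to score a predicted region label against a ground-truth WordNet synset: synonyms collapse to a shared synset, a correct but less specific hypernym earns partial credit, and sense ambiguity is resolved by taking the best score over the predicted word's senses.

```python
# Minimal sketch of a WordNet-based label-scoring protocol, using NLTK.
# Requires: pip install nltk; then nltk.download('wordnet').
# score_label and the path-similarity discount are illustrative assumptions.
from nltk.corpus import wordnet as wn

def score_label(predicted_word, truth_synset):
    """Score a predicted region label against a ground-truth WordNet synset.

    Synonyms map to the same synset, so they automatically score 1.0.
    A prediction that is a hypernym of the truth (correct but less
    specific, e.g. 'canine' for 'dog') earns partial credit.  Because
    vocabularies are usually sense-ambiguous, we take the best score
    over all senses of the predicted word.
    """
    # Every ancestor of the ground-truth synset, across all hypernym paths.
    ancestors = {s for path in truth_synset.hypernym_paths() for s in path}
    best = 0.0
    for pred in wn.synsets(predicted_word):
        if pred == truth_synset:
            return 1.0  # exact match (also covers synonyms)
        if pred in ancestors:
            # Illustrative partial credit: discount by distance in the
            # hierarchy (one of several reasonable specificity weightings).
            best = max(best, truth_synset.path_similarity(pred) or 0.0)
    return best

# Ground-truth region labeled with its most specific concept:
truth = wn.synset('dog.n.01')
print(score_label('dog', truth))     # 1.0: exact/synonym match
print(score_label('canine', truth))  # 0.5: hypernym, partial credit
print(score_label('car', truth))     # 0.0: unrelated concept
```

Per-region scores of this kind would then aggregate into either measure described in the abstract: the semantic range (which concepts an algorithm ever recognizes) or the frequency correct (how often it labels regions correctly).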