In defence of negative mining for annotating weakly labelled data

  • Authors:
  • Parthipan Siva, Chris Russell, Tao Xiang

  • Affiliations:
  • Queen Mary, University of London, London, United Kingdom (all authors)

  • Venue:
  • ECCV'12: Proceedings of the 12th European Conference on Computer Vision, Part III
  • Year:
  • 2012

Abstract

We propose a novel approach to annotating weakly labelled data. In contrast to many existing approaches that perform annotation by seeking clusters of self-similar exemplars (minimising intra-class variance), we perform image annotation by selecting exemplars that have never occurred before in the much larger, strongly annotated, negative training set (maximising inter-class variance). Compared to existing methods, our approach is fast, robust, and obtains state-of-the-art results on two challenging data-sets: VOC2007 (all poses) and the MSR2 action data-set, where we obtain a 10% increase. Moreover, this use of negative mining complements existing methods that seek to minimise the intra-class variance, and can be readily integrated with many of them.
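The selection rule described in the abstract can be illustrated with a short sketch. The code below is not the paper's actual scoring function: it assumes candidate windows from each weakly labelled image and windows from the negative set are already represented as feature vectors, and it uses plain Euclidean distance to the nearest negative exemplar as the inter-class score; the function name `negative_mine` and the array layouts are illustrative assumptions.

```python
import numpy as np

def negative_mine(candidate_feats, negative_feats):
    """Pick, for each weakly labelled positive image, the candidate window
    whose feature is farthest from the strongly annotated negative set
    (maximising inter-class distance rather than intra-class similarity).

    candidate_feats : list of (n_i, d) arrays, one per positive image
                      (one row per candidate window, e.g. object proposals)
    negative_feats  : (m, d) array of features from negative windows
    Returns the index of the selected window for each positive image.
    """
    selected = []
    for feats in candidate_feats:
        # Distance from every candidate window to every negative exemplar.
        dists = np.linalg.norm(
            feats[:, None, :] - negative_feats[None, :, :], axis=2
        )
        # Score each candidate by its distance to the closest negative.
        nearest_neg = dists.min(axis=1)
        # The candidate least similar to anything in the negative set is
        # taken as the annotation for this image.
        selected.append(int(nearest_neg.argmax()))
    return selected
```

In this reading, the large negative set does the heavy lifting: no clustering of the positives is needed, which is consistent with the abstract's claim that the approach is fast and can be combined with intra-class methods that refine the selected exemplars afterwards.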