QuMinS: Fast and scalable querying, mining and summarizing multi-modal databases

  • Authors:
  • Robson L. F. Cordeiro;Fan Guo;Donna S. Haverkamp;James H. Horne;Ellen K. Hughes;Gunhee Kim;Luciana A. S. Romani;Priscila P. Coltri;Tamires T. Souza;Agma J. M. Traina;Caetano Traina, Jr.;Christos Faloutsos


  • Venue:
  • Information Sciences: an International Journal
  • Year:
  • 2014

Abstract

Given a large image set in which very few images have labels, how do we guess labels for the remaining majority? How do we spot images that need brand-new labels, different from the predefined ones? How do we summarize the data to route the user's attention to what really matters? Here we answer all these questions. Specifically, we propose QuMinS, a fast, scalable solution to two problems: (i) Low-labor labeling (LLL) - given an image set in which very few images have labels, find the most appropriate labels for the rest; and (ii) Mining and attention routing - in the same setting, find clusters, the top-N_O outlier images, and the N_R images that best represent the data. Experiments on satellite images spanning up to 2.25 GB show that, in contrast to state-of-the-art labeling techniques, QuMinS scales linearly with the data size, running up to 40 times faster than its top competitor (GCap) while achieving equal or better accuracy; it spots images that potentially require unanticipated labels; and it works even with tiny initial label sets, i.e., nearly five examples. We also report a case study of our method's practical usage, showing that QuMinS is a viable tool for automatic coffee crop detection in remote sensing images.
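To make the low-labor-labeling (LLL) setting concrete, the sketch below shows a generic k-NN label-spreading baseline on toy 1-D data. This is not the QuMinS algorithm (nor GCap); the function `knn_label_spread`, the data, and all parameters are illustrative assumptions. The idea it demonstrates is only the task setup: start from a handful of seed labels and iteratively assign each unlabeled item the majority label among its nearest already-labeled neighbors.

```python
import numpy as np

def knn_label_spread(X, labels, k=2, iters=10):
    """Toy baseline for the LLL setting: `labels` holds -1 for the
    (many) unlabeled items and a class id for the (few) seeds.
    Repeatedly label each unlabeled item by majority vote over the
    labeled items among its k nearest neighbors."""
    labels = labels.copy()
    n = len(X)
    # Pairwise squared Euclidean distances; ignore self-distances.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    for _ in range(iters):
        changed = False
        for i in range(n):
            if labels[i] != -1:          # seeds and already-spread labels stay fixed
                continue
            nn = np.argsort(d2[i])[:k]   # indices of the k nearest neighbors
            votes = labels[nn]
            votes = votes[votes != -1]   # only labeled neighbors vote
            if votes.size:
                vals, counts = np.unique(votes, return_counts=True)
                labels[i] = vals[np.argmax(counts)]
                changed = True
        if not changed:                  # converged: nothing left to spread
            break
    return labels

# Two well-separated 1-D clusters, one seed label each -- the
# "tiny initial label set" regime the abstract describes.
X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0],
              [100.0], [101.0], [102.0], [103.0], [104.0]])
y = -np.ones(10, dtype=int)
y[0], y[5] = 0, 1                        # five unlabeled items per seed
out = knn_label_spread(X, y, k=2)
print(out)                               # -> [0 0 0 0 0 1 1 1 1 1]
```

Real systems in this space work on image feature vectors rather than raw 1-D points, and QuMinS specifically targets linear scalability on large multi-modal databases, which a brute-force pairwise-distance baseline like this does not have.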