Kernels for Generalized Multiple-Instance Learning

  • Authors:
  • Qingping Tao;Stephen D. Scott;N. V. Vinodchandran;Thomas Takeo Osugi;Brandon Mueller

  • Affiliations:
  • GC Image, LLC, Lincoln;University of Nebraska, Lincoln;University of Nebraska, Lincoln;Sphere Communications, Lincolnshire;Gallup, Inc., Omaha

  • Venue:
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
  • Year:
  • 2008

Abstract

The multiple-instance learning (MIL) model has been successful in numerous application areas. Recently, a generalization of this model and an algorithm for it were introduced, showing significant advantages over the conventional MIL model in certain applications. Unfortunately, that algorithm does not scale to high dimensions. We adapt it to one using a support vector machine with our new kernel k∧, reducing the time complexity from exponential in the dimension to polynomial. Computing our new kernel is equivalent to counting the number of boxes in a discrete, bounded space that contain at least one point from each of two multisets. We show that this problem is #P-complete, but then give a fully polynomial randomized approximation scheme (FPRAS) for it. We then extend k∧ by enriching its representation into a new kernel k_min, and also consider a normalized version of k∧ that we call k∧/∨ (which may or may not be a kernel, but whose approximation yielded positive semidefinite Gram matrices in practice). We then empirically evaluate all three measures on data from content-based image retrieval, biological sequence analysis, and the musk data sets. We found that our kernels performed well on all data sets relative to algorithms in the conventional MIL model.
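The box-counting problem underlying k∧ can be illustrated directly from its statement in the abstract: count the axis-parallel boxes in a discrete, bounded space that contain at least one point from each of two multisets. The sketch below is a naive brute-force version for intuition only; it enumerates every box and so runs in time exponential in the dimension d, which is exactly the cost the paper's FPRAS avoids. The function name `k_and`, the space `{0, ..., s-1}^d`, and the point encoding as tuples are illustrative assumptions, not the paper's implementation.

```python
from itertools import product

def k_and(P, Q, s, d):
    """Brute-force box count: number of axis-parallel boxes in the
    discrete space {0, ..., s-1}^d containing at least one point of
    multiset P and at least one point of multiset Q.
    Exponential in d -- illustration of the counting problem only."""
    def contains(lo, hi, pt):
        # A point lies in the box iff it is within bounds in every dimension.
        return all(lo[i] <= pt[i] <= hi[i] for i in range(d))

    count = 0
    # A box is a pair of corners (lo, hi) with lo[i] <= hi[i] everywhere.
    for lo in product(range(s), repeat=d):
        for hi in product(range(s), repeat=d):
            if any(lo[i] > hi[i] for i in range(d)):
                continue  # not a valid box
            if (any(contains(lo, hi, p) for p in P)
                    and any(contains(lo, hi, q) for q in Q)):
                count += 1
    return count

# In {0, 1}^1 the boxes are [0,0], [0,1], [1,1]; only [0,1]
# contains both the point 0 and the point 1.
print(k_and([(0,)], [(1,)], s=2, d=1))  # -> 1
```

Because the loop ranges over all O(s^(2d)) candidate boxes, even modest dimensions are infeasible here, which motivates the randomized approximation scheme described in the abstract.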