Combining example selection with instance selection to speed up multiple-instance learning

  • Authors:
  • Liming Yuan; Jiafeng Liu; Xianglong Tang

  • Venue:
  • Neurocomputing
  • Year:
  • 2014

Abstract

Recently, several instance selection-based methods have been proposed for the multiple-instance learning (MIL) problem. The basic idea is to convert MIL into standard supervised learning by selecting representative instance prototypes from the training set. In MIL, however, training examples are not single instances but bags of one or more instances, so the computational complexity of such methods is often very high. Previous methods address this issue only from the perspective of instance selection, not from that of example selection. In this paper, we address it by combining example selection with instance selection. Three general example selection methods are derived by adapting three immune-inspired algorithms to MIL. In addition, we propose a simple instance selection method for MIL based on the probability that an instance is positive given a set of negative instances. Our example selection methods are combined, as a preprocessing step, with the new MIL method and with previous instance selection-based methods. Theoretical analysis and empirical results show that our MIL method is competitive with the state of the art, and that the proposed example selection methods significantly speed up various instance selection-based MIL methods while only slightly weakening, and in some cases even improving, their performance.
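The instance selection criterion described in the abstract, scoring an instance by the probability that it is positive given a set of negative instances, can be sketched roughly as follows. This is an illustrative reading, not the authors' actual formulation: the Gaussian-similarity scoring, the `sigma` bandwidth, and the function names are all assumptions made for the sketch.

```python
import math

def prob_positive(instance, negative_instances, sigma=1.0):
    # Hypothetical scoring (not the paper's exact formula): an instance
    # far from every negative instance is more likely to be positive.
    # Model P(negative) as the maximum Gaussian similarity to any
    # negative instance, then take the complement.
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    p_negative = max(math.exp(-sq_dist(instance, n) / (sigma ** 2))
                     for n in negative_instances)
    return 1.0 - p_negative

def select_prototypes(positive_bags, negative_instances, k=1):
    # From each positive bag, keep the k instances most likely positive.
    # These prototypes could then feed a standard supervised learner,
    # which is the MIL-to-supervised conversion the abstract describes.
    prototypes = []
    for bag in positive_bags:
        ranked = sorted(bag,
                        key=lambda inst: prob_positive(inst, negative_instances),
                        reverse=True)
        prototypes.extend(ranked[:k])
    return prototypes
```

For example, given negatives clustered near the origin, an instance at (5, 5) scores far higher than one at (0, 0.1), so it would be kept as the bag's prototype. The paper's proposed example (bag) selection step would run before this, shrinking the set of bags the instance scoring has to process.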