Multiple-instance learning consists of two alternating optimization steps: learning a classifier with missing labels and inferring the missing labels with that classifier. These steps are performed iteratively on the same training data, so labels are imputed by evaluating the classifier on the very data it was trained on. Consequently, this alternating optimization is prone to self-amplification and overfitting. To resolve this crucial issue of popular multiple-instance learning approaches, we propose to establish a random ensemble of sets of bags, i.e., superbags. Classifier training and label inference are then decoupled by performing them on different superbags: label inference is carried out on samples from a separate superbag, which avoids imputing labels on the training samples of the same superbag. Experimental evaluations on standard datasets show consistent improvement over widely used approaches for multiple-instance learning.
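The following is a minimal, hypothetical sketch of the decoupling idea described above, not the authors' actual implementation. It assumes a standard MIL setup (positive bags contain at least one positive instance, negative bags contain only negatives), uses a plain logistic-regression instance classifier as a stand-in, and the helper names (`make_toy_bags`, `train_with_superbags`) and the exact label-inference rule are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_toy_bags(n_bags=40, bag_size=8, dim=5):
    """Toy MIL data: positive bags hide one or more shifted instances."""
    bags, bag_labels = [], []
    for i in range(n_bags):
        y = i % 2
        X = rng.normal(size=(bag_size, dim))
        if y == 1:
            X[: rng.integers(1, 3)] += 2.0  # a few truly positive instances
        bags.append(X)
        bag_labels.append(y)
    return bags, np.array(bag_labels)

def train_with_superbags(bags, bag_labels, n_super=3, n_iters=5):
    # Randomly partition the bags into superbags.
    idx = rng.permutation(len(bags))
    superbags = np.array_split(idx, n_super)

    # Initialize instance labels with the bag label.
    inst_labels = [np.full(len(b), y) for b, y in zip(bags, bag_labels)]

    clfs = [LogisticRegression(max_iter=1000) for _ in range(n_super)]
    for _ in range(n_iters):
        # 1) Train one classifier per superbag on its current instance labels.
        for s, members in enumerate(superbags):
            X = np.vstack([bags[i] for i in members])
            y = np.concatenate([inst_labels[i] for i in members])
            clfs[s].fit(X, y)
        # 2) Re-impute instance labels in positive bags, but only with a
        #    classifier trained on a *different* superbag (decoupling step).
        for s, members in enumerate(superbags):
            other = clfs[(s + 1) % n_super]  # assumed choice of "other" superbag
            for i in members:
                if bag_labels[i] == 1:
                    scores = other.decision_function(bags[i])
                    new = (scores > 0).astype(int)
                    new[np.argmax(scores)] = 1  # keep at least one positive per positive bag
                    inst_labels[i] = new
    return clfs

bags, bag_labels = make_toy_bags()
classifiers = train_with_superbags(bags, bag_labels)
```

The key point of the sketch is step 2: a bag's instance labels are never imputed by the classifier that was fitted on that bag's own superbag, which is what breaks the self-amplification loop of standard alternating MIL optimization.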