Multiple-Instance Learning via Embedded Instance Selection (MILES) is a recently proposed multiple-instance (MI) classification algorithm that applies a single-instance base learner to a propositionalized version of MI data. However, the original authors considered only one single-instance base learner for the algorithm: the 1-norm SVM. We present an empirical study investigating the efficacy of alternative base learners for MILES, and compare MILES to other MI algorithms. Our results show that boosted decision stumps can in some cases provide better classification accuracy than the 1-norm SVM as a base learner for MILES. Although MILES provides competitive performance when compared to other MI learners, we identify simpler propositionalization methods that require shorter training times while retaining MILES' strong classification performance on the datasets we tested.
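The propositionalization step the abstract refers to can be sketched as follows. MILES maps each bag to a single feature vector whose components measure the bag's similarity to candidate concept instances (typically all training instances), using the most-similar instance in the bag under a Gaussian kernel. This is a minimal illustrative sketch, not the authors' implementation; the function name `miles_features` and the bandwidth parameter `sigma` are ours.

```python
import numpy as np

def miles_features(bags, concept_instances, sigma=1.0):
    """Propositionalize MI data in the style of MILES.

    bags: list of (n_i, d) arrays, one array of instances per bag.
    concept_instances: (m, d) array of candidate concepts
        (commonly the pooled training instances).
    Returns an (n_bags, m) matrix F where F[i, k] is the similarity
    of bag i to concept k, taken over the bag's closest instance:
        F[i, k] = max_j exp(-||x_ij - x_k||^2 / sigma^2)
    """
    features = np.empty((len(bags), len(concept_instances)))
    for i, bag in enumerate(bags):
        # Squared distances between every instance in the bag
        # and every candidate concept instance.
        d2 = ((bag[:, None, :] - concept_instances[None, :, :]) ** 2).sum(axis=2)
        # Keep, per concept, the similarity of the closest bag instance.
        features[i] = np.exp(-d2 / sigma ** 2).max(axis=0)
    return features
```

The resulting single-instance dataset is what a base learner (the 1-norm SVM in the original formulation, or alternatives such as boosted decision stumps) is then trained on.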