In Multiple Instance Learning (MIL) problems, objects are represented by a set of feature vectors, in contrast to standard pattern recognition problems, where objects are represented by a single feature vector. Numerous classifiers have been proposed to solve this type of MIL classification problem. Unfortunately, only two datasets are standard in this field (MUSK-1 and MUSK-2), and all classifiers are evaluated on these datasets using the standard classification error. In practice it is very informative to investigate their learning curves, i.e. the performance on the training and test sets for a varying number of training objects. This paper evaluates several classifiers on the standard datasets MUSK-1 and MUSK-2 as a function of the training set size. The results suggest that for smaller datasets a Parzen density estimator may be preferred over the 'optimal' classifiers proposed in the literature.
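To illustrate the idea, the following is a minimal sketch (not the paper's actual implementation) of a Gaussian Parzen density estimator used for bag classification. The bag-labeling rule below, which labels a bag positive if any of its instances is more likely under the positive-class density than under the negative-class density, is an assumption echoing the standard MIL assumption; the bandwidth `h` is a free smoothing parameter.

```python
import numpy as np

def parzen_log_density(X_train, x, h):
    """Gaussian Parzen window estimate of log p(x) from instances X_train.

    p(x) = (1/n) * sum_i (2*pi*h^2)^(-d/2) * exp(-||x - x_i||^2 / (2*h^2))
    """
    d = X_train.shape[1]
    sq_dists = np.sum((X_train - x) ** 2, axis=1)
    log_kernels = -sq_dists / (2 * h * h) - 0.5 * d * np.log(2 * np.pi * h * h)
    # log-mean-exp for numerical stability
    m = log_kernels.max()
    return m + np.log(np.mean(np.exp(log_kernels - m)))

def classify_bag(bag, pos_instances, neg_instances, h):
    """Label a bag positive (1) if any instance looks more 'positive' than
    'negative' under the two class-conditional Parzen densities
    (an illustrative rule, not necessarily the one used in the paper)."""
    for x in bag:
        if parzen_log_density(pos_instances, x, h) > parzen_log_density(neg_instances, x, h):
            return 1
    return 0
```

Because the density estimate is a simple average of kernels centered on the training instances, the classifier has no training phase beyond choosing `h`, which is one reason such estimators can behave well when little training data is available.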