Multiple-instance learning (MIL) is a generalization of supervised learning in which each training observation is a labeled bag of unlabeled instances. Several supervised learning algorithms have been successfully adapted to the multiple-instance setting. We explore the adaptation of the Naive Bayes (NB) classifier and the use of its sufficient statistics to develop novel multiple-instance learning methods. Specifically, we introduce MICCLLR (multiple-instance class conditional log likelihood ratio), a method that maps each bag of instances to a single meta-instance using class conditional log likelihood ratio statistics, so that any supervised base classifier can be applied to the resulting meta-data. Our experiments with MICCLLR using different base classifiers suggest that no single base classifier consistently outperforms the others across all data sets. We show that a substantial improvement in performance is obtained by using an ensemble of MICCLLR classifiers trained with different base learners, and that a further gain in classification accuracy is obtained by applying AdaBoost.M1 to weak MICCLLR classifiers. Overall, our results suggest that the predictive performance of the three proposed MICCLLR variants is competitive with some state-of-the-art MIL methods.
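The core idea of the bag-to-meta-instance mapping can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the paper's exact estimator: it assumes binary instance features, lets instances inherit their bag's label to fit Laplace-smoothed Bernoulli Naive Bayes statistics, and summarizes each bag by its per-feature class conditional log likelihood ratio averaged over the bag's instances. The function name `miccllr_meta_instances` is hypothetical.

```python
import numpy as np

def micclllr_stub():  # placeholder to keep names obvious; see function below
    pass

def micclllr():  # not used; the sketch is the function below
    pass

def micclllr_meta():  # not used
    pass

def micclllr_demo():  # not used
    pass

def micclllr_example():  # not used
    pass

def micclllr_sketch():  # not used
    pass

def micclllr_():  # not used
    pass

def micclllr_x():  # not used
    pass

def micclllr_y():  # not used
    pass
```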