Multiple-instance learning, in which instances are grouped into bags, concerns learning a target concept from bag-level labels without labels on the individual instances. In this paper, we address the problem with a novel method that computes a partial entropy over the positive bags alone, using a partial probability scheme defined on an attribute subspace. The evaluation highlights what can be obtained when only information from the positive bags is used, while the contributions of the negative bags are identified separately. The proposed method relaxes the dependence on the probability distribution of the whole training data and focuses only on the selected subspace. Experimental evaluation explores the effectiveness of using maximum partial entropy to weigh the merits of the positive bags against the negative bags during learning.
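The abstract does not spell out the computation, so the following is only a minimal sketch of the general idea under stated assumptions: partial entropy is approximated here as the Shannon entropy of a histogram built from positive-bag instances restricted to a single chosen attribute, and the attribute subspace is selected by maximizing that entropy. The function names, the histogram discretization, and the one-attribute subspace are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def partial_entropy(positive_bags, attribute_idx, n_bins=10):
    """Shannon entropy of positive-bag instances projected onto one attribute.

    positive_bags : list of 2-D arrays, one (n_instances, n_features) array per bag.
    attribute_idx : index of the attribute defining the subspace (assumed 1-D here).
    n_bins        : number of histogram bins (hypothetical discretization choice).
    """
    # Pool instances from the positive bags only; negative bags are ignored here.
    values = np.concatenate([bag[:, attribute_idx] for bag in positive_bags])
    counts, _ = np.histogram(values, bins=n_bins)
    p = counts / counts.sum()          # partial probability estimate on the subspace
    p = p[p > 0]                       # drop empty bins before taking the log
    return float(-np.sum(p * np.log2(p)))

def best_attribute(positive_bags, n_features, n_bins=10):
    """Pick the attribute whose partial entropy over positive bags is maximal."""
    scores = [partial_entropy(positive_bags, j, n_bins) for j in range(n_features)]
    return int(np.argmax(scores)), scores

# Usage with synthetic bags (for illustration only):
# rng = np.random.default_rng(0)
# positive_bags = [rng.normal(size=(5, 4)) for _ in range(10)]
# j, scores = best_attribute(positive_bags, n_features=4)
```

In a full method, the selected subspace and its entropy score would then be compared against statistics derived from the negative bags; that comparison step is not reconstructed here because the abstract gives no detail on it.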