In the supervised learning setting termed Multiple-Instance Learning (MIL), the examples are bags of instances, and the label of a bag is a function of the labels of its instances; typically, this function is the Boolean OR. The learner observes a sample of bags with their bag labels, but not the instance labels that determine them, and must then produce a classification rule for bags based on the sample. MIL has numerous applications, and many heuristic algorithms have been used successfully on this problem, each adapted to a specific setting or application. In this work we provide a unified theoretical analysis for MIL that holds for any underlying hypothesis class, regardless of the specific application or problem domain. We show that the sample complexity of MIL is only poly-logarithmically dependent on the bag size, for any underlying hypothesis class. In addition, we introduce a new PAC-learning algorithm for MIL, which uses a regular supervised learning algorithm as an oracle. We prove that an efficient PAC-learning algorithm for MIL can be obtained from any efficient non-MIL supervised learning algorithm that handles one-sided error. The computational complexity of the resulting algorithm is only polynomially dependent on the bag size.
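The MIL setting described above can be illustrated with a minimal sketch. Here `threshold_hypothesis` and `bag_label` are hypothetical names for illustration, not from the paper; the key point is that the bag label is the Boolean OR of the hidden instance labels, and the learner only ever sees bags and bag labels.

```python
# Minimal sketch of the MIL setting (illustrative names, not from the paper).

def threshold_hypothesis(x, t=0.5):
    """A toy instance-level classifier: an instance is positive if x > t."""
    return x > t

def bag_label(bag, h):
    """The bag label is the Boolean OR of the (hidden) instance labels."""
    return any(h(x) for x in bag)

# The learner observes only (bag, bag_label) pairs, never instance labels.
bags = [[0.1, 0.2, 0.9], [0.3, 0.4], [0.6]]
labels = [bag_label(b, threshold_hypothesis) for b in bags]
# labels == [True, False, True]
```

A PAC learner for MIL, as in the paper's reduction, would query a supervised learning oracle on derived instance-level problems rather than seeing the instance labels directly.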