Letter Recognition Using Holland-Style Adaptive Classifiers
Machine Learning
Solving the multiple instance problem with axis-parallel rectangles
Artificial Intelligence
Histograms of Oriented Gradients for Human Detection
CVPR '05 Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) - Volume 1
Object Categorization by Learned Universal Visual Dictionary
ICCV '05 Proceedings of the Tenth IEEE International Conference on Computer Vision - Volume 2
A regularization framework for multiple-instance learning
ICML '06 Proceedings of the 23rd international conference on Machine learning
Statistical Comparisons of Classifiers over Multiple Data Sets
The Journal of Machine Learning Research
Pegasos: Primal Estimated sub-GrAdient SOlver for SVM
Proceedings of the 24th international conference on Machine learning
LIBLINEAR: A Library for Large Linear Classification
The Journal of Machine Learning Research
M3MIML: A Maximum Margin Method for Multi-instance Multi-label Learning
ICDM '08 Proceedings of the 2008 Eighth IEEE International Conference on Data Mining
Learning from ambiguously labeled examples
Intelligent Data Analysis - Selected papers from IDA2005, Madrid, Spain
Drosophila gene expression pattern annotation through multi-instance multi-label learning
IJCAI '09 Proceedings of the 21st International Joint Conference on Artificial Intelligence
The Pascal Visual Object Classes (VOC) Challenge
International Journal of Computer Vision
A New SVM Approach to Multi-instance Multi-label Learning
ICDM '10 Proceedings of the 2010 IEEE International Conference on Data Mining
LIBSVM: A library for support vector machines
ACM Transactions on Intelligent Systems and Technology (TIST)
Pegasos: primal estimated sub-gradient solver for SVM
Mathematical Programming: Series A and B - Special Issue on "Optimization and Machine Learning"
Multi-instance multi-label learning
Artificial Intelligence
Ensemble multi-instance multi-label learning approach for video annotation task
MM '11 Proceedings of the 19th ACM international conference on Multimedia
Rank-loss support instance machines for MIML instance annotation
Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining
Multi-instance multi-label learning (MIML) is a framework for supervised classification where the objects to be classified are bags of instances associated with multiple labels. For example, an image can be represented as a bag of segments and associated with a list of objects it contains. Prior work on MIML has focused on predicting label sets for previously unseen bags. We instead consider the problem of predicting instance labels while learning from data labeled only at the bag level. We propose a regularized rank-loss objective designed for instance annotation, which can be instantiated with different aggregation models connecting instance-level labels with bag-level label sets. The aggregation models that we consider can be factored as a linear function of a "support instance" for each class, which is a single feature vector representing a whole bag. Hence we name our proposed methods rank-loss Support Instance Machines (SIM). We propose two optimization methods for the rank-loss objective, which is nonconvex. One is a heuristic method that alternates between updating support instances and solving a convex problem in which the support instances are treated as constant. The other is to apply the constrained concave-convex procedure (CCCP), which can also be interpreted as iteratively updating support instances and solving a convex problem. To solve the convex problem, we employ the Pegasos framework of primal subgradient descent, and prove that it finds an ε-suboptimal solution in runtime that is linear in the number of bags, instances, and 1/ε. Additionally, we suggest a method of extending the linear learning algorithm to nonlinear classification without increasing the runtime asymptotically. Experiments on artificial and real-world datasets, including images and audio, show that the proposed methods achieve higher accuracy than methods based on other loss functions used in prior work, e.g., Hamming loss, and than recent work in ambiguous label classification.
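The pipeline the abstract describes — max-style aggregation yielding a per-class support instance, a rank loss over present-vs-absent labels, and Pegasos-style primal subgradient descent — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes max aggregation, a pairwise hinge rank loss, and a plain 1/(λt) step size; all function names (`support_instances`, `rank_loss_subgradient`, `train_sim`) are hypothetical.

```python
import numpy as np

def support_instances(W, bag):
    # For each class c, the "support instance" is the instance in the bag
    # that maximizes the class-c score (max aggregation). The bag-level
    # score for c is then linear in that single feature vector.
    scores = bag @ W.T               # (n_instances, n_classes)
    idx = scores.argmax(axis=0)      # index of the support instance per class
    return bag[idx]                  # (n_classes, d)

def rank_loss_subgradient(W, bag, labels, n_classes, lam):
    # Hinge-style rank loss: every label present in the bag's label set
    # should outscore every absent label by a margin of 1.
    S = support_instances(W, bag)            # support instances held fixed
    f = np.einsum('cd,cd->c', W, S)          # bag-level score per class
    g = lam * W                              # subgradient of the L2 regularizer
    pos = [c for c in range(n_classes) if c in labels]
    neg = [c for c in range(n_classes) if c not in labels]
    if not pos or not neg:
        return g
    norm = 1.0 / (len(pos) * len(neg))       # average over label pairs
    for p in pos:
        for q in neg:
            if 1.0 + f[q] - f[p] > 0:        # margin violated
                g[p] -= norm * S[p]
                g[q] += norm * S[q]
    return g

def train_sim(bags, label_sets, n_classes, d, lam=0.1, epochs=20, seed=0):
    # Pegasos-style primal subgradient descent with step size 1/(lam * t);
    # support instances are recomputed at every step, which corresponds to
    # the alternating heuristic described in the abstract.
    rng = np.random.default_rng(seed)
    W = np.zeros((n_classes, d))
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(bags)):
            t += 1
            g = rank_loss_subgradient(W, bags[i], label_sets[i], n_classes, lam)
            W -= g / (lam * t)
    return W
```

Instance-level annotation then falls out of the learned weights: an individual instance x is labeled by argmax over the class scores W @ x, even though training only ever saw bag-level label sets.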