In this paper we propose a semi-supervised, multiple-instance-learning-based boosting algorithm for domain adaptation, with face detection as an example. Very often a generic classifier learned from a large volume of training data needs to be tuned to work in a specific scenario: when deployed, the test scenarios may differ, if only marginally, from the training ones. For example, a face detection system may be deployed in an airport as well as in an auditorium hallway, and the classifier then needs to adapt to the new domain. Instead of completely retraining the classifier with examples from the new scenario, it is desirable to see how much the classifier can "self-learn". Conventional self-learning algorithms take the putative positives produced on test data by the base classifier and select a subset of them using more stringent thresholds. We propose an alternative self-learning approach based on the popular multiple instance learning framework, which trains the classifier on "bags" of instances rather than single instances. We pool the putative positives on a given test image into a positive bag and the putative negatives into a negative bag, add these bags to the initial training data, and retrain the classifier using MILBoost. The specific advantage of our approach is that, because it operates on bags, it is more robust to classification errors made by the base classifier. We demonstrate the improvement in classification accuracy on the Faces in the Wild database, showing that our approach outperforms self-learning and compares favorably with MILBoost trained on manually marked face data, without the corresponding increase in labeling effort.
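As a minimal sketch (not the paper's code), the bag construction described above, together with the noisy-OR bag probability commonly used in MILBoost, might look like the following; the function names, detection record format, and score threshold are assumptions for illustration:

```python
import math

def make_bags(detections, threshold=0.5):
    """Split a base classifier's detections on one test image into a
    putative-positive bag and a putative-negative bag by score.
    (Illustrative: the threshold value is an assumption.)"""
    pos_bag = [d for d in detections if d["score"] >= threshold]
    neg_bag = [d for d in detections if d["score"] < threshold]
    return pos_bag, neg_bag

def noisy_or(instance_probs):
    """Noisy-OR bag probability used in MILBoost: a bag is positive if
    at least one instance is positive, so
    p(bag) = 1 - prod_i (1 - p_i)."""
    prod = 1.0
    for p in instance_probs:
        prod *= (1.0 - p)
    return 1.0 - prod

# Toy detections from a hypothetical base face detector on one image.
detections = [{"score": 0.9}, {"score": 0.2}, {"score": 0.7}]
pos_bag, neg_bag = make_bags(detections)
print(len(pos_bag), len(neg_bag))            # 2 1
print(round(noisy_or([0.9, 0.2, 0.7]), 3))   # 1 - 0.1*0.8*0.3 = 0.976
```

The bags built this way would then be appended to the original training set before retraining; the noisy-OR form is what lets a positive bag tolerate some falsely included instances, which is the robustness the abstract claims over instance-level self-learning.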