We study the problem of information recycling in boosting-based cascade visual object detectors. Information obtained in the earlier stages of a cascade detector is believed to remain beneficial for the later stages, so a more efficient detector can be constructed by recycling this existing information. In this work, we propose a biased selection strategy that promotes the re-use of existing information when selecting weak classifiers or features in each boosting iteration. The strategy can be interpreted as introducing a cardinality-based cost term into the boosting loss function, and we solve the resulting learning problem in a step-wise manner, similar to the gradient-boosting scheme. Our work thus provides an alternative to the popular sparsity-inducing norms for solving such problems. Experimental results show that our method outperforms existing methods.
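To make the idea concrete, the following is a minimal sketch (not the paper's exact formulation) of biased weak-learner selection in an AdaBoost-style loop: a stump is scored by its weighted error plus a cardinality penalty `gamma` that is charged only when the stump uses a feature not selected in any earlier iteration, so re-using existing features is favored. The stump learner, the penalty weight `gamma`, and all function names are illustrative assumptions.

```python
import numpy as np

def biased_boosting(X, y, T=40, gamma=0.1):
    """Toy sketch of boosting with a cardinality-based selection bias.

    At each round, every decision stump is scored by its weighted error
    plus a penalty `gamma` charged only if the stump introduces a NEW
    feature; stumps on already-used features are thus preferred.
    `gamma` and the stump learner are illustrative choices.
    """
    n, d = X.shape
    w = np.ones(n) / n          # sample weights, sum to 1
    used = set()                # indices of features selected so far
    ensemble = []               # (feature, threshold, polarity, alpha)
    for _ in range(T):
        best = None
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, j] - thr) > 0, 1, -1)
                    err = np.sum(w * (pred != y))
                    # cardinality cost: pay gamma only for a new feature
                    cost = err + (gamma if j not in used else 0.0)
                    if best is None or cost < best[0]:
                        best = (cost, err, j, thr, pol, pred)
        _, err, j, thr, pol, pred = best
        # alpha uses the raw (unpenalized) error; the penalty only
        # biases which stump gets selected, not its vote weight
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        used.add(j)
        ensemble.append((j, thr, pol, alpha))
        w *= np.exp(-alpha * y * pred)  # standard AdaBoost reweighting
        w /= w.sum()
    return ensemble, used

def predict(ensemble, X):
    """Sign of the weighted vote of the selected stumps."""
    F = sum(a * np.where(p * (X[:, j] - t) > 0, 1, -1)
            for j, t, p, a in ensemble)
    return np.sign(F)
```

Because the penalty is applied at selection time only, the scheme stays a step-wise (gradient-boosting-like) procedure, in contrast to methods that enforce sparsity through a norm on the whole coefficient vector.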