We present a boosting method that produces a decision tree rather than a fixed linear sequence of classifiers. Equivalently, we present a tree-growing method whose performance can be analysed in the framework of AdaBoost. We argue that AdaBoost can be improved by presenting the input to a sequence of weak classifiers, each one tuned to the conditional probability determined by the outputs of the previous weak classifiers. As a result, the final classifier has a tree structure rather than a linear one, hence the name "AdaTree". One consequence of the tree structure is that different inputs may require different processing times. Early experiments show a reduced computation cost relative to AdaBoost. One of our intended applications is real-time detection, where cascades of boosted detectors have recently proved successful. The reduced computation cost of the proposed method suggests some potential for use directly in detection problems, without the need for a cascade.
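The idea of a tree of weak classifiers, where each node's binary output routes the input to a child tuned for that conditional outcome, can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact algorithm: the `Stump` weak classifier, the node layout, and the weighted-vote accumulation are all assumptions chosen for clarity.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Stump:
    """A weak classifier: thresholds a single feature (illustrative choice)."""
    feature: int
    threshold: float

    def predict(self, x) -> int:
        # Returns a binary vote, +1 or -1.
        return 1 if x[self.feature] > self.threshold else -1

@dataclass
class Node:
    """A tree node holding one weak classifier and its vote weight.
    The children would each be trained on the conditional distribution
    implied by the parent's output (training itself is omitted here)."""
    stump: Stump
    alpha: float                      # weight of this node's vote
    pos: Optional["Node"] = None      # child followed when the stump votes +1
    neg: Optional["Node"] = None      # child followed when the stump votes -1

def classify(node: Optional[Node], x) -> int:
    """Walk the tree from the root, summing weighted weak votes along the
    path taken. Different inputs follow different paths of different
    lengths, so the per-input computation cost varies."""
    score = 0.0
    while node is not None:
        vote = node.stump.predict(x)
        score += node.alpha * vote
        node = node.pos if vote > 0 else node.neg
    return 1 if score >= 0 else -1
```

For example, a two-node tree `Node(Stump(0, 0.5), 1.0, pos=Node(Stump(1, 0.5), 0.5))` evaluates one stump for inputs rejected at the root but two stumps for inputs accepted there, which is the source of the variable processing time the abstract mentions.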