The Strength of Weak Learnability
Machine Learning
Hierarchical mixtures of experts and the EM algorithm
Neural Computation
Machine Learning
A decision-theoretic generalization of on-line learning and an application to boosting
Journal of Computer and System Sciences - Special issue: 26th Annual ACM Symposium on Theory of Computing (STOC '94), May 23–25, 1994, and Second Annual European Conference on Computational Learning Theory (EuroCOLT '95), March 13–15, 1995
Boosting classifiers regionally
AAAI '98/IAAI '98 Proceedings of the Fifteenth National Conference on Artificial Intelligence / Tenth Conference on Innovative Applications of Artificial Intelligence
Machine Learning
iBoost: Boosting Using an Instance-Based Exponential Weighting Scheme
ECML '02 Proceedings of the 13th European Conference on Machine Learning
A local boosting algorithm for solving classification problems
Computational Statistics & Data Analysis
Issues in stacked generalization
Journal of Artificial Intelligence Research
Dynamic integration with random forests
ECML '06 Proceedings of the 17th European Conference on Machine Learning
RegionBoost is a classical example of boosting with a dynamic weighting scheme. Despite its demonstrated strong performance on a variety of classification problems, relatively little effort has been devoted to a detailed analysis of its convergence behavior. This paper presents results from a preliminary attempt to understand the practical convergence behavior of RegionBoost. It shows that, in some situations, the training error of RegionBoost may not converge as consistently as that of its counterpart AdaBoost, and that a deeper understanding of this phenomenon may contribute substantially to improving RegionBoost.
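To make the contrast concrete, the sketch below compares AdaBoost's fixed combination weights with a RegionBoost-style dynamic rule that weights each base learner by its accuracy on the k nearest training points of the query. This is a minimal illustration on 1-D data, not the paper's implementation: the decision-stump learner, the k-nearest-neighbour competence estimate, and all names here are illustrative assumptions.

```python
import numpy as np

def train_stump(X, y, w):
    """Return the (weighted_error, threshold, polarity) decision stump
    minimising weighted 0/1 error on 1-D inputs X with labels y in {-1, +1}."""
    best = None
    for thr in np.unique(X):
        for pol in (1, -1):
            pred = np.where(X < thr, pol, -pol)
            err = w[pred != y].sum()
            if best is None or err < best[0]:
                best = (err, thr, pol)
    return best

def stump_predict(stump, X):
    _, thr, pol = stump
    return np.where(X < thr, pol, -pol)

def adaboost(X, y, T=10):
    """Standard AdaBoost: each stump gets a FIXED global weight alpha_t."""
    n = len(X)
    w = np.full(n, 1.0 / n)
    stumps, alphas = [], []
    for _ in range(T):
        err, thr, pol = train_stump(X, y, w)
        err = max(err, 1e-10)                      # avoid log(0) / division by 0
        alpha = 0.5 * np.log((1 - err) / err)
        pred = stump_predict((err, thr, pol), X)
        w *= np.exp(-alpha * y * pred)             # exponential re-weighting
        w /= w.sum()
        stumps.append((err, thr, pol))
        alphas.append(alpha)
    return stumps, np.array(alphas)

def regionboost_predict(stumps, X_train, y_train, x, k=3):
    """RegionBoost-style prediction (illustrative assumption): each stump is
    weighted by its accuracy on the k training points nearest to the query x,
    i.e. a local competence estimate instead of a fixed alpha."""
    idx = np.argsort(np.abs(X_train - x))[:k]
    score = 0.0
    for s in stumps:
        local_acc = np.mean(stump_predict(s, X_train[idx]) == y_train[idx])
        score += local_acc * stump_predict(s, np.array([x]))[0]
    return 1 if score >= 0 else -1
```

Because the combination weights now depend on the query point, the aggregate classifier is no longer the fixed weighted vote whose training error AdaBoost's standard analysis bounds, which is one intuition for why RegionBoost's training error need not shrink as consistently.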