Smoothed emphasis for boosting ensembles
IWANN'13 Proceedings of the 12th international conference on Artificial Neural Networks: advances in computational intelligence - Volume Part I
Real AdaBoost is a well-known boosting method with good performance that is used to build machine ensembles for classification. Its emphasis function can be decomposed into two factors that pay separate attention to sample errors and to sample proximity to the classification boundary; accordingly, a generalized emphasis function that combines both components by means of a selectable parameter, λ, is presented. Experiments show that simple methods for selecting λ frequently offer better performance and smaller ensembles.
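To make the decomposition concrete: for labels y in {-1, +1}, Real AdaBoost's weighting exp(-y f(x)) factors, up to a constant, into an error term exp((f(x) - y)^2 / 2) and a boundary-proximity term exp(-f(x)^2 / 2), since (f - y)^2 / 2 - f^2 / 2 = -y f + 1/2. Below is a minimal NumPy sketch of one way a λ-mixed emphasis could be built from these two factors; the function name and the exact parameterization are illustrative assumptions, not necessarily the paper's definition.

import numpy as np

def generalized_emphasis(f, y, lam):
    """Illustrative lambda-mixed emphasis (assumed parameterization).

    f   : ensemble outputs per sample, values in [-1, 1]
    y   : true labels, values in {-1, +1}
    lam : mixing parameter in [0, 1]; lam = 0.5 recovers Real
          AdaBoost's emphasis up to normalization, because
          0.5 * (f - y)**2 - 0.5 * f**2 = -y * f + 0.5
    """
    # Error factor grows with the quadratic error (f - y)**2;
    # proximity factor grows as |f| shrinks toward the boundary.
    w = np.exp(lam * (f - y) ** 2 - (1.0 - lam) * f ** 2)
    return w / w.sum()  # normalize to a sampling distribution

# lam -> 1 stresses erroneous samples only; lam -> 0 stresses samples
# close to the classification boundary (small |f|) only.
f = np.array([0.9, 0.1, -0.8, -0.05])
y = np.array([1.0, 1.0, 1.0, -1.0])
print(generalized_emphasis(f, y, lam=0.5))

In this sketch, λ near 1 concentrates the emphasis on erroneous samples, while λ near 0 concentrates it on samples near the boundary, matching the two components the abstract describes.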