The purpose of this study is to incorporate prior knowledge into a boosting algorithm. Existing approaches require additional samples that represent the prior knowledge. Moreover, to adjust the balance between the information in the training samples and the prior knowledge about the data domain, one must rerun the boosting algorithm for each value of the regularization parameter. Both properties make such approaches computationally costly. In this paper, we propose a boosting algorithm that incorporates prior knowledge while avoiding these computational issues: our method considers the mixture distribution of the estimator and the prior knowledge. We describe numerical experiments showing the effectiveness of our approach.
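
As a rough illustration of the mixture idea (a minimal sketch, not the authors' exact algorithm), the code below mixes the class-probability output of an off-the-shelf boosted classifier with a fixed prior distribution over the labels. The mixing weight lam, the uniform prior p_prior, and the use of scikit-learn's AdaBoostClassifier are assumptions made for illustration; the paper's actual estimator and balance parameter may differ.

    # Sketch: mixture of a boosted estimator and prior knowledge.
    # Assumptions (not from the paper): scikit-learn's AdaBoostClassifier
    # as the boosted estimator, a fixed class prior `p_prior`, and a
    # hand-chosen mixing weight `lam`.
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier

    def mixture_predict_proba(model, X, p_prior, lam=0.3):
        """Mix the estimator's class probabilities with a prior:
        p(y|x) = (1 - lam) * p_model(y|x) + lam * p_prior(y)."""
        p_model = model.predict_proba(X)   # shape (n_samples, n_classes)
        return (1.0 - lam) * p_model + lam * np.asarray(p_prior)

    # Usage on synthetic data.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)

    clf = AdaBoostClassifier(n_estimators=50).fit(X, y)
    p_prior = np.array([0.5, 0.5])         # assumed prior over the two classes
    proba = mixture_predict_proba(clf, X, p_prior, lam=0.3)
    pred = proba.argmax(axis=1)

Because the prior enters only through the final mixture, the balance parameter can be varied after a single training run, in contrast to the existing approaches described above, which must repeat boosting for each regularization value.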