Boosting is well known to increase the accuracy of propositional and multi-relational classification learners. However, the base learner's efficiency critically determines boosting's efficiency, since the complexity of the underlying learner is amplified by the repeated calls issued by the boosting framework. The idea of restricting the learner to smaller feature subsets in order to increase efficiency is widely used. Surprisingly, little attention has been paid so far to exploiting characteristics of boosting itself to include features based on the current learning progress. In this paper, we show that the dynamics inherent to boosting offer ideal means to maximize the efficiency of the learning process. We describe how to utilize the training examples' margins, which boosting is known to maximize, to reduce learning times without deteriorating learning quality. We propose to include features stepwise in the learning process in response to a slowdown in the improvement of the margins. Experimental results show that this approach significantly reduces the learning time while maintaining or even improving the predictive accuracy of the underlying fully equipped learner.
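The idea of margin-triggered feature inclusion can be illustrated with a small sketch. The code below is not the paper's algorithm but a minimal AdaBoost over decision stumps in which the stump search is restricted to an active feature subset; whenever the mean normalized margin improves by less than a threshold, the next unused feature is unlocked. All names (`margin_boost`, `slowdown`, `init_feats`) and the specific stall criterion are assumptions made for illustration.

```python
import numpy as np

def stump_predict(X, feat, thresh, sign):
    """Decision stump: sign * (+1 if X[:, feat] > thresh else -1)."""
    return sign * np.where(X[:, feat] > thresh, 1.0, -1.0)

def best_stump(X, y, w, active_feats):
    """Exhaustive search for the lowest weighted-error stump,
    restricted to the currently active feature subset."""
    best, best_err = None, np.inf
    for f in active_feats:
        vals = np.unique(X[:, f])
        for t in (vals[:-1] + vals[1:]) / 2:   # midpoint thresholds
            for s in (1.0, -1.0):
                err = np.sum(w[stump_predict(X, f, t, s) != y])
                if err < best_err:
                    best_err, best = err, (f, t, s)
    return best, best_err

def margin_boost(X, y, rounds=10, init_feats=2, slowdown=1e-3):
    """AdaBoost with stepwise feature inclusion: unlock one more
    feature whenever margin improvement stalls (illustrative sketch)."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    active = list(range(min(init_feats, d)))   # start with a feature prefix
    ensemble, F, total_alpha, prev_margin = [], np.zeros(n), 0.0, -np.inf
    for _ in range(rounds):
        (f, t, s), err = best_stump(X, y, w, active)
        err = max(err, 1e-12)                  # guard against log(0)
        alpha = 0.5 * np.log((1 - err) / err)
        h = stump_predict(X, f, t, s)
        w *= np.exp(-alpha * y * h)            # standard AdaBoost reweighting
        w /= w.sum()
        ensemble.append((alpha, f, t, s))
        F += alpha * h
        total_alpha += alpha
        # mean normalized margin of the current ensemble
        mean_margin = np.mean(y * F / max(total_alpha, 1e-12))
        if mean_margin - prev_margin < slowdown and len(active) < d:
            active.append(len(active))         # unlock the next feature
        prev_margin = mean_margin
    return ensemble, active

def predict(ensemble, X):
    F = sum(a * stump_predict(X, f, t, s) for a, f, t, s in ensemble)
    return np.sign(F)
```

On a toy task where only the initially inactive third feature is informative, the margin stalls at zero on the first two features, the third feature is unlocked, and the ensemble then classifies perfectly. The prefix-based unlocking (`active.append(len(active))`) is a simplification; any feature-selection order could be plugged in at that point.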