Floating search methods in feature selection. Pattern Recognition Letters.
The Random Subspace Method for Constructing Decision Forests. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Improved Boosting Algorithms Using Confidence-rated Predictions. Machine Learning; The Eleventh Annual Conference on Computational Learning Theory.
Diversity versus Quality in Classification Ensembles Based on Feature Selection. ECML '00: Proceedings of the 11th European Conference on Machine Learning.
Boosting the margin: A new explanation for the effectiveness of voting methods. ICML '97: Proceedings of the Fourteenth International Conference on Machine Learning.
AAAI '96: Proceedings of the Thirteenth National Conference on Artificial Intelligence, Volume 1.
Improved Uniformity Enforcement in Stochastic Discrimination. MCS '09: Proceedings of the 8th International Workshop on Multiple Classifier Systems.
BISAR: boosted input selection algorithm for regression. IJCNN '09: Proceedings of the 2009 International Joint Conference on Neural Networks.
Combining bagging, boosting, rotation forest and random subspace methods. Artificial Intelligence Review.
Robust Video Content Analysis via Transductive Learning. ACM Transactions on Intelligent Systems and Technology (TIST).
Integrating global and local application of random subspace ensemble. Journal of Intelligent & Fuzzy Systems: Applications in Engineering and Technology.
It is possible to reduce the error rate of a single classifier by using a classifier ensemble. However, any gain in performance is offset by the increased computation of performing classification several times. Here the AdaboostFS algorithm is proposed, which builds on two popular areas of ensemble research: Adaboost and Ensemble Feature Selection (EFS). The aim of AdaboostFS is to reduce the number of features used by each base classifier and hence the overall computation required by the ensemble. To do this, the algorithm combines a regularised version of boosting, AdaboostReg [1], with a floating feature search for each base classifier. AdaboostFS is compared on four benchmark data sets to AdaboostAll, which uses all features, and to AdaboostRSM, which uses a random selection of features. Performance is assessed in terms of error rate, ensemble error and diversity, and the total number of features used for classification. Results show that AdaboostFS achieves a lower error rate and higher diversity than AdaboostAll, and a lower error rate and comparable diversity to AdaboostRSM. However, compared with the other methods, AdaboostFS significantly reduces the number of features required for classification, both in each base classifier and across the entire ensemble.
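The following Python sketch illustrates the general shape of such an algorithm: an AdaBoost-style loop in which each round selects a small feature subset for its base classifier before fitting it on the current sample weights. It is a minimal illustration only, assuming the standard (unregularised) AdaBoost weight update rather than the AdaboostReg update used by AdaboostFS, a plain greedy forward search in place of a full floating (SFFS) search, and decision stumps as base classifiers; all names and parameters (greedy_forward_select, max_features, n_rounds) are illustrative, not taken from the paper.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def greedy_forward_select(X, y, sample_weight, max_features):
    # Greedily add the feature whose addition most reduces weighted training error.
    # A true floating search would follow each forward step with conditional
    # backward steps; that refinement is omitted here for brevity.
    selected, remaining = [], list(range(X.shape[1]))
    best_err = np.inf
    while remaining and len(selected) < max_features:
        errs = {}
        for f in remaining:
            cols = selected + [f]
            clf = DecisionTreeClassifier(max_depth=1).fit(X[:, cols], y, sample_weight=sample_weight)
            errs[f] = np.sum(sample_weight[clf.predict(X[:, cols]) != y])
        f_best = min(errs, key=errs.get)
        if errs[f_best] >= best_err:   # stop when adding a feature no longer helps
            break
        best_err = errs[f_best]
        selected.append(f_best)
        remaining.remove(f_best)
    return selected

def boost_with_feature_selection(X, y, n_rounds=10, max_features=3):
    # AdaBoost-style ensemble in which each base classifier sees only its own
    # selected feature subset (labels assumed to be -1/+1).
    n = len(y)
    w = np.full(n, 1.0 / n)                      # sample weights
    ensemble = []                                # (alpha, feature subset, classifier)
    for _ in range(n_rounds):
        feats = greedy_forward_select(X, y, w, max_features)
        clf = DecisionTreeClassifier(max_depth=1).fit(X[:, feats], y, sample_weight=w)
        pred = clf.predict(X[:, feats])
        err = max(np.sum(w[pred != y]), 1e-10)
        if err >= 0.5:                           # base classifier no better than chance
            break
        alpha = 0.5 * np.log((1.0 - err) / err)  # classifier weight
        w *= np.exp(np.where(pred == y, -alpha, alpha))
        w /= w.sum()                             # renormalise sample weights
        ensemble.append((alpha, feats, clf))
    return ensemble

def predict(ensemble, X):
    # Weighted vote over the base classifiers, each applied to its own feature subset.
    return np.sign(sum(alpha * clf.predict(X[:, feats]) for alpha, feats, clf in ensemble))

A full implementation would add the conditional backward (floating) steps to the per-round search and the soft-margin regularisation term to the weight update, but the control flow (per-round subset selection, a weighted fit, then a weighted vote at prediction time) is the same.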