FaSS: Ensembles for Stable Learners

  • Authors:
  • Kai Ming Ting; Jonathan R. Wells; Swee Chuan Tan; Shyh Wei Teng; Geoffrey I. Webb

  • Affiliations:
  • Gippsland School of Information Technology, Monash University, Australia; Gippsland School of Information Technology, Monash University, Australia; Gippsland School of Information Technology, Monash University, Australia; Gippsland School of Information Technology, Monash University, Australia; Clayton School of Information Technology, Monash University, Australia

  • Venue:
  • MCS '09 Proceedings of the 8th International Workshop on Multiple Classifier Systems
  • Year:
  • 2009

Abstract

This paper introduces a new ensemble approach, Feature-Space Subdivision (FaSS), which builds local models instead of global models. FaSS is a generic ensemble approach that can use either stable or unstable models as its base models. In contrast, existing ensemble approaches which employ randomisation can only use unstable models. Our analysis shows that the time required to generate each model in the ensemble decreases as the level of localisation in FaSS increases. Our empirical evaluation shows that FaSS performs significantly better than boosting in terms of predictive accuracy when a stable learner, SVM, is used as the base learner. The speed-up achieved by FaSS makes SVM ensembles feasible for large data sets where they would otherwise be impractical, and FaSS SVM performs better than Boosting J48 and Random Forests when SVM is the preferred base learner.
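The abstract describes the mechanism only at a high level. The sketch below is a minimal, hypothetical illustration of a FaSS-style ensemble of local SVMs, not the authors' implementation: each member randomly chooses a few attributes, cuts them into equal-width intervals to form grid cells, and trains one local SVM per cell; predictions are routed to the cell containing the test point and combined by majority vote. The class name `FaSSLikeEnsemble` and the parameters `n_members`, `n_split_features` and `n_bins` are illustrative assumptions.

```python
# Hypothetical FaSS-style ensemble sketch (not the authors' reference code).
import numpy as np
from collections import Counter
from sklearn.svm import SVC
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split


class FaSSLikeEnsemble:
    """Ensemble of local models built on random feature-space subdivisions."""

    def __init__(self, n_members=10, n_split_features=2, n_bins=2, random_state=0):
        self.n_members = n_members              # number of random subdivisions
        self.n_split_features = n_split_features  # attributes cut per subdivision
        self.n_bins = n_bins                    # equal-width intervals per attribute
        self.rng = np.random.default_rng(random_state)

    def _cell_ids(self, X, feats, edges):
        # Map each instance to the grid cell given by its interval index on every split feature.
        idx = [np.digitize(X[:, f], e[1:-1]) for f, e in zip(feats, edges)]
        return [tuple(t) for t in zip(*idx)]

    def fit(self, X, y):
        self.members_ = []
        self.default_ = Counter(y).most_common(1)[0][0]  # global fallback label
        for _ in range(self.n_members):
            feats = self.rng.choice(X.shape[1], self.n_split_features, replace=False)
            edges = [np.linspace(X[:, f].min(), X[:, f].max(), self.n_bins + 1) for f in feats]
            cells = self._cell_ids(X, feats, edges)
            models = {}
            for cell in set(cells):
                mask = np.array([c == cell for c in cells])
                yc = y[mask]
                if len(set(yc)) > 1:   # enough class diversity to train a local SVM
                    models[cell] = SVC(kernel="linear").fit(X[mask], yc)
                else:                   # degenerate cell: remember its single label
                    models[cell] = yc[0]
            self.members_.append((feats, edges, models))
        return self

    def predict(self, X):
        votes = []
        for feats, edges, models in self.members_:
            cells = self._cell_ids(X, feats, edges)
            preds = []
            for x, cell in zip(X, cells):
                m = models.get(cell, self.default_)  # unseen cell -> global fallback
                preds.append(m.predict(x[None, :])[0] if hasattr(m, "predict") else m)
            votes.append(preds)
        votes = np.array(votes)
        # Majority vote across ensemble members for each test instance.
        return np.array([Counter(col).most_common(1)[0][0] for col in votes.T])


if __name__ == "__main__":
    X, y = load_iris(return_X_y=True)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    ens = FaSSLikeEnsemble().fit(Xtr, ytr)
    print("accuracy:", (ens.predict(Xte) == yte).mean())
```

In this simplified sketch, `n_split_features` and `n_bins` stand in for the paper's level of localisation: finer subdivisions give each local SVM a smaller training set, which is consistent with the abstract's observation that the time to build each model falls as localisation increases.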