Combining Uncertainty Sampling methods for supporting the generation of meta-examples

  • Authors:
  • Ricardo B. C. Prudêncio; Teresa B. Ludermir

  • Affiliations:
  • Centro de Informática, Universidade Federal de Pernambuco, Recife, Brazil (both authors)

  • Venue:
  • Information Sciences: an International Journal
  • Year:
  • 2012

Abstract

Meta-Learning aims to automatically acquire knowledge relating features of learning problems to the performance of learning algorithms. Each training example in Meta-Learning (i.e. each meta-example) stores the features of a learning problem plus the performance obtained by a set of algorithms when evaluated on that problem. Based on a set of meta-examples, a Meta-Learner is used to predict algorithm performance for new problems. Generating a good set of meta-examples can be a costly process, since each problem requires an empirical evaluation of the algorithms. In previous work, we proposed Active Meta-Learning, in which Active Learning is used to reduce the set of meta-examples by selecting only the most relevant problems for meta-example generation. In the current work, we extend that research by combining different Uncertainty Sampling methods for Active Meta-Learning, considering that each individual method provides useful information for selecting relevant problems. We also investigated the use of Outlier Detection to remove, a priori, problems considered outliers, aiming to improve the performance of the sampling methods. In our experiments, we observed a gain in Meta-Learning performance when the proposed combination method was compared to the individual active methods being combined, and also when outliers were removed from the set of problems available for meta-example generation.
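
As a rough illustration of the score-combination idea described above (a minimal sketch, not the authors' implementation), the Python snippet below averages the rankings produced by two common uncertainty measures, entropy and margin, to pick the next unlabelled problem to turn into a meta-example. The function names and the toy probability values are assumptions for illustration only.

import numpy as np

def entropy_uncertainty(proba):
    # Entropy of the meta-learner's class probabilities (higher = more uncertain).
    p = np.clip(proba, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=1)

def margin_uncertainty(proba):
    # Negative margin between the two most probable classes (higher = more uncertain).
    part = np.sort(proba, axis=1)
    return -(part[:, -1] - part[:, -2])

def combined_ranking(proba):
    # Combine methods by averaging the rank position each one assigns
    # to every candidate problem (rank 0 = most uncertain).
    scores = [entropy_uncertainty(proba), margin_uncertainty(proba)]
    ranks = [np.argsort(np.argsort(-s)) for s in scores]
    return np.mean(ranks, axis=0)

# Toy usage: three candidate problems, meta-learner class probabilities assumed.
proba = np.array([[0.90, 0.10], [0.55, 0.45], [0.60, 0.40]])
best = int(np.argmin(combined_ranking(proba)))
print("next problem to label as a meta-example:", best)

Rank averaging is used here only because it puts the individual uncertainty scores on a common scale before combining them; the paper's actual combination scheme may differ.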