Reinforcement learning for active model selection

  • Authors:
  • Aloak Kapoor; Russell Greiner

  • Affiliations:
  • University of Alberta, Edmonton, AB; University of Alberta, Edmonton, AB

  • Venue:
  • UBDM '05: Proceedings of the 1st International Workshop on Utility-Based Data Mining
  • Year:
  • 2005

Abstract

In many practical Machine Learning tasks, there are costs associated with acquiring the feature values of training instances, as well as a hard learning budget that limits the number of feature values that can be purchased. In this budgeted learning scenario, it is important to use an effective "data acquisition policy" that specifies how to spend the budget acquiring training data to produce an accurate classifier. This paper examines a simplified version of this problem, "active model selection" [10]. As this is a Markov decision problem, we consider applying reinforcement learning (RL) techniques to learn an effective spending policy. Despite extensive training, our experiments on various versions of the problem show that the performance of RL techniques is inferior to existing, simpler spending policies.
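To make the setting concrete, active model selection is often cast as a coin-flipping problem: each "model" is a coin with an unknown head probability, each flip costs one unit of a fixed budget, and after the budget is spent the learner must select the coin it believes is best. The following is a minimal illustrative sketch (not the authors' code) of one of the simple spending policies the paper compares against, round-robin, using Beta posteriors; all function and parameter names are hypothetical.

```python
import random

def active_model_selection(true_probs, budget, seed=0):
    """Illustrative round-robin spending policy for the coin-flip
    formulation of active model selection (hypothetical names).

    Each coin i starts with a uniform Beta(1, 1) prior over its head
    probability; each flip costs one unit of budget. Once the budget
    is exhausted, return the index of the coin with the highest
    posterior mean probability of heads.
    """
    rng = random.Random(seed)
    n = len(true_probs)
    heads = [1] * n  # Beta alpha parameters (prior pseudo-count of heads)
    tails = [1] * n  # Beta beta parameters (prior pseudo-count of tails)
    for t in range(budget):
        i = t % n  # round-robin: buy one flip from each coin in turn
        if rng.random() < true_probs[i]:
            heads[i] += 1
        else:
            tails[i] += 1
    # Posterior mean of a Beta(a, b) distribution is a / (a + b).
    post_means = [h / (h + t) for h, t in zip(heads, tails)]
    return max(range(n), key=lambda i: post_means[i])
```

An RL approach would instead treat the vector of Beta parameters as the state of a Markov decision problem and try to learn which coin to flip next; the paper's finding is that such learned policies underperform simple fixed policies like the one sketched above.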