Democratic approximation of lexicographic preference models

  • Authors:
  • Fusun Yaman; Thomas J. Walsh; Michael L. Littman; Marie desJardins

  • Affiliations:
  • BBN Technologies, 10 Moulton St., Cambridge, MA 02138, USA; University of Arizona, Department of Computer Science, Tucson, AZ 85721, USA; Rutgers University, Department of Computer Science, Piscataway, NJ 08854, USA; University of Maryland Baltimore County, Computer Science and Electrical Engineering Department, Baltimore, MD 21250, USA

  • Venue:
  • Artificial Intelligence
  • Year:
  • 2011

Abstract

Lexicographic preference models (LPMs) are an intuitive representation that corresponds to many real-world preferences exhibited by human decision makers. Previous algorithms for learning LPMs produce a single "best guess" LPM that is consistent with the observations. Our approach is more democratic: rather than committing to a single LPM, we approximate the target using the votes of a collection of consistent LPMs. We present two variations of this method, variable voting and model voting, and empirically show that these democratic algorithms outperform existing methods. Versions of these democratic algorithms are presented both for the case where the preferred values of attributes are known and for the case where they are unknown. We also introduce an intuitive yet powerful form of background knowledge for pruning the space of possible LPMs. We demonstrate how this background knowledge can be incorporated into variable and model voting and show that doing so improves performance significantly, especially when the number of observations is small.
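
To make the setting concrete, the sketch below is a minimal, hypothetical illustration (not the paper's exact algorithms) of an LPM over binary attributes and a naive form of model voting: every attribute ordering consistent with the observed pairwise preferences casts a vote on a new pair, and the majority wins. The function names, the brute-force enumeration of attribute orders, and the toy data are assumptions for illustration; the paper's variable-voting and model-voting procedures are more refined and do not rely on exhaustive enumeration.

```python
from itertools import permutations

def lpm_prefers(order, preferred, x, y):
    """True if x is strictly preferred to y under the LPM given by an
    importance order on attributes and a preferred value per attribute."""
    for attr in order:
        if x[attr] != y[attr]:
            # The most important attribute on which x and y differ decides.
            return x[attr] == preferred[attr]
    return False  # x and y agree on every attribute: no strict preference

def consistent_lpms(attributes, preferred, observations):
    """Brute-force enumeration of attribute orders consistent with every
    observed pair (x, y), read as 'x was preferred to y'."""
    return [order for order in permutations(attributes)
            if all(lpm_prefers(order, preferred, x, y) for x, y in observations)]

def model_vote(models, preferred, x, y):
    """Naive majority vote over all consistent LPMs on whether x beats y."""
    votes_for_x = sum(lpm_prefers(order, preferred, x, y) for order in models)
    return votes_for_x > len(models) / 2

# Toy example: three binary attributes, preferred value 1 for each.
attributes = [0, 1, 2]
preferred = {a: 1 for a in attributes}
# One observation: (1, 0, 0) was preferred to (0, 1, 1), so attribute 0 must be
# the most important attribute; two orderings remain consistent.
observations = [((1, 0, 0), (0, 1, 1))]
models = consistent_lpms(attributes, preferred, observations)
print(model_vote(models, preferred, (0, 1, 1), (0, 0, 0)))  # True: all consistent models agree
```

In this toy setup the consistent models disagree on some pairs and agree on others; voting aggregates them instead of arbitrarily picking one, which is the intuition behind the democratic approximation described in the abstract.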