Relational preference rules for control

  • Authors: Ronen I. Brafman
  • Affiliations: Department of Computer Science, Ben-Gurion University, PO Box 653, Beer-Sheva 84105, Israel
  • Venue: Artificial Intelligence
  • Year: 2011

Abstract

Value functions are defined over a fixed set of outcomes. In work on preference handling in AI, these outcomes are usually assignments to a fixed set of state variables. If the set of variables changes, a new value function must be elicited. Given that in most applications the state variables are properties (attributes) of objects in the world, this implies that introducing new objects requires re-eliciting preferences. Often, however, the user has in mind preference information that is much more generic: information relevant to a given type of domain regardless of the precise number of objects of each kind and their properties. Capturing such information requires relational models. Following in the footsteps of work on probabilistic relational models (PRMs), we suggest a rule-based, relational language of preferences. This language extends regular rule-based languages and leads to a much more flexible approach for specifying control rules for autonomous systems. It also extends standard generalized-additive value functions to handle a dynamic universe of objects: given any specific set of objects, the specification induces a generalized-additive value function over assignments to the controllable attributes associated with those objects. Finally, we describe a prototype decision support system for command and control centers that we developed to illustrate and study the use of these rules.
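
As a rough illustration of the idea (not the paper's actual rule syntax), the sketch below shows in Python how relational rules that attach a local value to every object of a given class induce a generalized-additive value function once grounded over a concrete set of objects. All names here (Obj, Rule, gai_value) are hypothetical.

    # A minimal, hypothetical sketch: relational preference rules that,
    # once grounded over a concrete set of objects, induce a
    # generalized-additive value function. The rule representation and
    # all names below are illustrative, not the paper's language.
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class Obj:
        """A domain object: a class name plus attribute values."""
        cls: str
        attrs: Dict[str, object] = field(default_factory=dict)

    @dataclass
    class Rule:
        """A relational rule: applies to every object of a class,
        contributing a local value based on that object's attributes."""
        cls: str
        local_value: Callable[[Dict[str, object]], float]

    def gai_value(objects: List[Obj], rules: List[Rule]) -> float:
        """Ground the rules over the given objects and sum the local
        values, yielding a generalized-additive value for the joint
        assignment to the objects' attributes."""
        return sum(rule.local_value(obj.attrs)
                   for rule in rules
                   for obj in objects
                   if obj.cls == rule.cls)

    # Example: prefer every truck to be fueled. The rule quantifies
    # over all trucks, however many exist.
    rules = [Rule("truck", lambda a: 1.0 if a.get("fueled") else 0.0)]
    fleet = [Obj("truck", {"fueled": True}),
             Obj("truck", {"fueled": False})]
    print(gai_value(fleet, rules))  # 1.0

Because the rule quantifies over all objects of its class, adding a third truck changes the grounding but requires no new preference elicitation, which is the flexibility the abstract describes.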