Preference generation for autonomous agents

  • Authors:
  • Umair Rafique; Shell Ying Huang

  • Affiliation:
  • School of Computer Engineering, Nanyang Technological University, Singapore

  • Venue:
  • MATES'10: Proceedings of the 8th German Conference on Multiagent System Technologies
  • Year:
  • 2010


Abstract

An intelligent agent situated in an environment needs to know the preferred states it is expected to achieve or maintain so that it can work towards achieving or maintaining them. We refer to all these preferred states as "preferences". The preferences an agent has selected to bring about at a given time are called "goals", and this selection of preferences as goals is generally referred to as "goal generation". The basic aim of goal generation is to provide the agent with a way of acquiring new goals. Although goal generation increases the agent's knowledge about its goals, it does not increase the agent's overall autonomy, because its goals are still derived from its preferences, which are programmed into it. We argue that to achieve greater autonomy, an agent must be able to generate new preferences. In this paper we discuss how an agent can generate new preferences based on analogies between new objects and objects for which it has known preferences.
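As a rough, non-authoritative illustration of the analogy-based idea (the abstract does not prescribe an implementation), the Python sketch below assumes objects are described by attribute sets, measures similarity with Jaccard overlap, and proposes a preference for a new object whenever it is sufficiently similar to an object for which a preference is already known. The class names, the similarity measure, and the threshold are all illustrative assumptions, not the authors' mechanism.

```python
from dataclasses import dataclass

# Illustrative sketch only: objects are described by attribute sets,
# and preferences are desired states attached to known objects.

@dataclass(frozen=True)
class Obj:
    name: str
    attributes: frozenset

@dataclass(frozen=True)
class Preference:
    obj: Obj
    desired_state: str  # e.g. "kept charged"

def similarity(a: Obj, b: Obj) -> float:
    """Jaccard similarity over attribute sets (an assumed measure)."""
    if not a.attributes and not b.attributes:
        return 0.0
    return len(a.attributes & b.attributes) / len(a.attributes | b.attributes)

def generate_preferences(new_obj: Obj, known: list, threshold: float = 0.6) -> list:
    """Propose preferences for new_obj by analogy with objects
    that already carry known preferences."""
    proposals = []
    for pref in known:
        if similarity(new_obj, pref.obj) >= threshold:
            proposals.append(Preference(new_obj, pref.desired_state))
    return proposals

if __name__ == "__main__":
    phone = Obj("phone", frozenset({"electronic", "battery-powered", "portable"}))
    laptop = Obj("laptop", frozenset({"electronic", "battery-powered", "portable", "has-keyboard"}))
    known_prefs = [Preference(phone, "kept charged")]
    # The laptop shares most attributes with the phone, so by analogy
    # the agent proposes that it, too, should be kept charged.
    for p in generate_preferences(laptop, known_prefs):
        print(f"New preference: {p.obj.name} should be {p.desired_state}")
```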