A model for the dynamic coordination of multiple competing goals

  • Authors:
  • Jose Antonio Martin H.; Javier de Lope

  • Affiliations:
  • Dep. Sistemas Informaticos y Computacion, Universidad Complutense de Madrid, Madrid, Spain; Department of Applied Intelligent Systems, Universidad Politecnica de Madrid, Madrid, Spain

  • Venue:
  • Journal of Experimental & Theoretical Artificial Intelligence
  • Year:
  • 2009

Abstract

A general framework for the coordination of multiple competing goals in dynamic environments for physical agents is presented. This approach to goal coordination is a novel way to incorporate a deep coordination ability into purely reactive agents. The framework is based on the notion of multi-objective optimisation. In this article we propose an 'aggregating functions' formulation with the particularity that the aggregation is weighted by a dynamic unit weighting vector that depends on the system's dynamic state, allowing the agent to dynamically coordinate the priorities of its individual goals. This dynamic unit weighting vector is represented by a set of (n-1) angles. The dynamic coordination is established through a mapping from the state of the agent's environment S to the set of angles Φi(S), learned by means of a machine-learning tool. In this work, we investigate the use of reinforcement learning as a first approach to learning that mapping.
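The abstract does not spell out how the (n-1) angles are turned into an n-dimensional unit weight vector, but a standard construction for this is the spherical-coordinate parametrisation. The sketch below, in Python, illustrates this reading under that assumption: the function names, the example angles, and the per-goal rewards are hypothetical and only serve to show how a state-dependent set of angles Φi(S) could weight and aggregate the rewards of n competing goals.

```python
import numpy as np

def angles_to_unit_vector(phi):
    """Map (n-1) angles to an n-dimensional unit weight vector via the
    spherical-coordinate construction (an assumption; the abstract only
    states that the unit vector is encoded by n-1 angles)."""
    phi = np.asarray(phi, dtype=float)
    w = np.ones(phi.size + 1)
    for i, angle in enumerate(phi):
        w[i] *= np.cos(angle)        # component i picks up cos(phi_i)
        w[i + 1:] *= np.sin(angle)   # later components pick up sin(phi_i)
    return w                         # np.dot(w, w) == 1 up to rounding error

def aggregate(per_goal_rewards, phi):
    """Aggregating-functions formulation: scalarise the per-goal rewards
    with the weight vector derived from the current state's angles."""
    return float(np.dot(angles_to_unit_vector(phi), per_goal_rewards))

# Hypothetical usage: three competing goals, two angles produced by the
# learned mapping Phi_i(S) for the current environment state S.
phi_of_state = [0.3, 1.1]
per_goal_rewards = [0.8, -0.2, 0.5]
print(aggregate(per_goal_rewards, phi_of_state))
```

Because the weights form a unit vector, changing the angles reprioritises the goals without changing the overall scale of the aggregated value; a learner (e.g. a reinforcement-learning agent, as the abstract indicates) only has to output the (n-1) angles for each state.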