Learning other agents' preferences in multiagent negotiation

  • Authors:
  • H. H. Bui; D. Kieronska; S. Venkatesh

  • Affiliations:
  • Department of Computer Science, Curtin University of Technology, Perth, WA, Australia (all authors)

  • Venue:
  • AAAI'96: Proceedings of the Thirteenth National Conference on Artificial Intelligence - Volume 1
  • Year:
  • 1996

Abstract

In multiagent systems, an agent does not usually have complete information about the preferences and decision-making processes of other agents. This can prevent the agents from making coordinated choices, purely because they do not know what the others want. This paper describes the integration of a learning module into a communication-intensive negotiating agent architecture. The learning module enables the agents to learn about other agents' preferences from past interactions. Over time, the agents can incrementally update their models of other agents' preferences and use them to make better coordinated decisions. Combining communication and learning, as two complementary knowledge acquisition methods, helps to reduce the amount of communication needed on average, and is justified in situations where communication is computationally costly or simply undesirable (e.g., to preserve individual privacy).
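
The abstract does not detail the learning module's update rule. The following is a minimal sketch of one way an agent could incrementally maintain a model of another agent's preferences from observed choices; the class name, the pairwise win-rate heuristic, and the example options are illustrative assumptions, not the authors' algorithm.

```python
from collections import defaultdict


class PreferenceModel:
    """Hypothetical incremental model of another agent's preferences.

    Tracks how often each option was preferred in observed pairwise
    choices and scores options by their empirical win rate.
    """

    def __init__(self):
        self.wins = defaultdict(int)         # option -> times preferred
        self.comparisons = defaultdict(int)  # option -> times observed

    def observe(self, chosen, rejected):
        """Update the model after seeing `chosen` preferred over `rejected`."""
        for option in (chosen, rejected):
            self.comparisons[option] += 1
        self.wins[chosen] += 1

    def score(self, option):
        """Estimated preference: empirical win rate (0.5 if never observed)."""
        n = self.comparisons[option]
        return self.wins[option] / n if n else 0.5

    def rank(self, options):
        """Rank candidate options by the other agent's estimated preference."""
        return sorted(options, key=self.score, reverse=True)


if __name__ == "__main__":
    model = PreferenceModel()
    # Past interactions: the other agent repeatedly picked "meet_tuesday".
    model.observe("meet_tuesday", "meet_friday")
    model.observe("meet_tuesday", "meet_monday")
    model.observe("meet_friday", "meet_monday")
    print(model.rank(["meet_monday", "meet_tuesday", "meet_friday"]))
    # -> ['meet_tuesday', 'meet_friday', 'meet_monday']
```

In this sketch, the ranking can be consulted before sending a proposal, so queries whose answers the model already predicts with confidence need not be communicated, which reflects the paper's motivation of reducing average communication.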