A novel method for coevolving PS-optimizing negotiation strategies using improved diversity controlling EDAs

  • Authors:
  • Jeonghwan Gwak; Kwang Mong Sim

  • Affiliations:
  • School of Information and Mechatronics, Gwangju Institute of Science and Technology, Gwangju, Republic of Korea 500-712; School of Computing, The University of Kent, Chatham, UK ME4 4AG

  • Venue:
  • Applied Intelligence
  • Year:
  • 2013

Abstract

In agent-mediated negotiation systems, the majority of research has focused on finding negotiation strategies that optimize price only. However, in negotiation systems with time constraints (e.g., resource negotiations for Grid and Cloud computing), it is crucial to optimize price, negotiation speed, or both, according to the preferences of the participants, so as to improve efficiency and increase utilization. To this end, this work presents the design and implementation of negotiation agents that can optimize both price and negotiation speed (for given preference settings of these parameters) in a negotiation setting with complete information. Then, to support negotiations with incomplete information, this work addresses the problem of finding effective agent negotiation strategies through coevolutionary learning, which yields optimal negotiation outcomes. In the coevolutionary learning method used here, two types of estimation of distribution algorithms (EDAs), namely conventional EDAs (S-EDAs) and novel improved dynamic diversity controlling EDAs (ID2C-EDAs), were adopted for comparative studies. A series of experiments was conducted to evaluate the performance of the EDAs in coevolving effective negotiation strategies. In the experiments, each agent adopts one of three representative preference criteria: (1) placing more emphasis on optimizing price, (2) placing equal emphasis on optimizing price and speed, and (3) placing more emphasis on optimizing speed. Experimental results demonstrate the effectiveness of coevolutionary learning with ID2C-EDAs: it generally coevolved effective converged negotiation strategies (close to the optimum), whereas coevolutionary learning with S-EDAs often failed to coevolve such strategies within a reasonable number of generations.
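As background for the comparison above, a conventional (S-)EDA iterates between estimating a probability model from the fittest individuals and resampling the population from that model. The following is a minimal UMDA-style sketch of that loop over bit-string strategies; the objective (OneMax), parameter values, and the probability-clamping margins are illustrative assumptions, not the paper's actual strategy encoding or its ID2C diversity-control mechanism.

```python
import random

def umda(fitness, n_bits=20, pop_size=50, n_select=25, generations=60, seed=0):
    """Minimal UMDA-style EDA: sample a population from a vector of
    per-bit marginal probabilities, select the fittest individuals,
    re-estimate the marginals from them, and repeat."""
    rng = random.Random(seed)
    p = [0.5] * n_bits          # probability model, initially uniform
    best = None
    for _ in range(generations):
        # sample a new population from the current model
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n_bits)]
               for _ in range(pop_size)]
        pop.sort(key=fitness, reverse=True)
        if best is None or fitness(pop[0]) > fitness(best):
            best = pop[0]
        # estimate per-bit marginals from the selected (fittest) half;
        # clamping to [0.05, 0.95] is an assumed guard against premature
        # convergence (loss of diversity), the very failure mode that
        # motivates diversity-controlling EDAs
        selected = pop[:n_select]
        p = [min(0.95, max(0.05, sum(ind[i] for ind in selected) / n_select))
             for i in range(n_bits)]
    return best

# illustrative objective: OneMax (count of 1 bits); in the paper's setting
# fitness would instead score a decoded price/speed negotiation strategy
best = umda(sum)
```

Note how the model collapses toward a single point as the marginals saturate; the paper's ID2C-EDAs address exactly this loss of diversity, which plain clamping only partially mitigates.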