BLGAN: Bayesian learning and genetic algorithm for supporting negotiation with incomplete information

  • Authors:
  • Kwang Mong Sim;Yuanyuan Guo;Benyun Shi

  • Affiliations:
  • Department of Information and Communications, Gwangju Institute of Science and Technology, Gwangju, Korea and Department of Computer Science, Hong Kong Baptist University, Kowloon Tong, Hong Kong;Department of Computer Science, University of New Brunswick, Saint John, NB, Canada;Department of Information and Communications, Gwangju Institute of Science and Technology, Gwangju, Korea and Department of Computer Science, Hong Kong Baptist University, Kowloon Tong, Hong Kong

  • Venue:
  • IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics - Special issue on human computing
  • Year:
  • 2009

Abstract

Automated negotiation provides a means for resolving differences among interacting agents. For negotiation with complete information, this paper provides mathematical proofs to show that an agent's optimal strategy can be computed using its opponent's reserve price (RP) and deadline. The impetus of this work is to use the synergy of Bayesian learning (BL) and a genetic algorithm (GA) to determine an agent's optimal strategy in negotiation (N) with incomplete information. BLGAN adopts: 1) BL and a deadline-estimation process for estimating an opponent's RP and deadline, and 2) a GA for generating a proposal at each negotiation round. Learning the RP and deadline of an opponent enables the GA in BLGAN to reduce the size of its search space (SP) by adaptively focusing its search on a specific region in the space of all possible proposals. SP is dynamically defined as a region around an agent's proposal P at each negotiation round. P is generated using the agent's optimal strategy, determined from its estimates of its opponent's RP and deadline. Hence, the GA in BLGAN is more likely to generate proposals that are close to the proposal generated by the optimal strategy. By using the GA to search around a proposal generated by its current strategy, an agent in BLGAN compensates for possible errors in estimating its opponent's RP and deadline. Empirical results show that agents adopting BLGAN reached agreements successfully and achieved: 1) higher utilities and better combined negotiation outcomes (CNOs) than agents that only adopt a GA to generate their proposals, 2) higher utilities than agents that adopt BL to learn only the RP, and 3) higher utilities and better CNOs than agents that do not learn their opponents' RPs and deadlines.
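
The sketch below illustrates, in Python, the interplay the abstract describes: BL estimates the opponent's RP from observed offers, a time-dependent strategy generates a proposal P from that estimate, and a GA searches a reduced window around P. It is not the authors' implementation: the likelihood model, the concession formula, the fitness function, and all numeric settings and helper names (bl_update, strategy_proposal, ga_refine) are illustrative assumptions.

```python
"""Minimal sketch of the BLGAN idea (assumed details, not the paper's method)."""
import random

# --- Bayesian learning of the opponent's reserve price (assumed likelihood) ---
def bl_update(hypotheses, prior, observed_offer, spread=5.0):
    """Posterior over candidate opponent RPs given one observed offer.

    Likelihood: offers are assumed to fall near the hypothesised RP, modelled
    here with a simple triangular kernel of width `spread` (an assumption).
    """
    likelihood = [max(0.0, 1.0 - abs(observed_offer - rp) / spread) + 1e-6
                  for rp in hypotheses]
    posterior = [l * p for l, p in zip(likelihood, prior)]
    total = sum(posterior)
    return [p / total for p in posterior]

def estimated_rp(hypotheses, posterior):
    """Point estimate of the opponent's RP (posterior mean)."""
    return sum(rp * p for rp, p in zip(hypotheses, posterior))

# --- Time-dependent proposal from the (estimated) strategy ------------------
def strategy_proposal(ip, target, t, deadline, concession=1.0):
    """Concede from the initial price `ip` toward `target` (the estimated
    opponent RP) as the deadline approaches. A placeholder for the optimal
    strategy derived analytically in the paper."""
    ratio = min(1.0, t / deadline) ** concession
    return ip + ratio * (target - ip)

# --- GA restricted to a window around the strategy proposal -----------------
def ga_refine(center, own_rp, width=3.0, pop_size=20, generations=30):
    """Search the reduced region [center - width, center + width] for a
    proposal maximising the agent's own utility (here, a seller's margin
    above its RP), compensating for errors in the RP/deadline estimates."""
    lo, hi = center - width, center + width
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    fitness = lambda price: max(0.0, price - own_rp)   # seller's utility proxy
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2 + random.gauss(0.0, 0.2)  # crossover + mutation
            children.append(min(hi, max(lo, child)))
        pop = parents + children
    return max(pop, key=fitness)

# --- One negotiation round for a hypothetical seller agent ------------------
if __name__ == "__main__":
    hypotheses = [float(x) for x in range(40, 81, 5)]     # candidate buyer RPs
    prior = [1.0 / len(hypotheses)] * len(hypotheses)
    prior = bl_update(hypotheses, prior, observed_offer=55.0)

    buyer_rp_est = estimated_rp(hypotheses, prior)
    p = strategy_proposal(ip=100.0, target=buyer_rp_est, t=3, deadline=10)
    proposal = ga_refine(center=p, own_rp=50.0)
    print(f"estimated buyer RP={buyer_rp_est:.2f}, strategy P={p:.2f}, "
          f"GA proposal={proposal:.2f}")
```

Confining the GA's population to a window around P mirrors the abstract's point that learning the opponent's RP and deadline shrinks the search space, while the GA's local search hedges against estimation error in those learned values.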