Rules of encounter: designing conventions for automated negotiation among computers
Bayesian learning in negotiation
International Journal of Human-Computer Studies - Evolution and learning in multiagent systems
AAAI '99/IAAI '99 Proceedings of the sixteenth national conference on Artificial intelligence and the eleventh Innovative applications of artificial intelligence conference innovative applications of artificial intelligence
Bargaining theory with applications
Introduction to Multiagent Systems
Determining Successful Negotiation Strategies: An Evolutionary Approach
ICMAS '98 Proceedings of the 3rd International Conference on Multi Agent Systems
Agent-mediated electronic commerce: a survey
The Knowledge Engineering Review
Bargaining with incomplete information
Annals of Mathematics and Artificial Intelligence
Learning opponents' preferences in multi-object automated negotiation
ICEC '05 Proceedings of the 7th international conference on Electronic commerce
An evolutionary learning approach for adaptive negotiation agents: Research Articles
International Journal of Intelligent Systems - Learning Approaches for Negotiation Agents and Automated Negotiation
A survey of bargaining models for grid resource allocation
ACM SIGecom Exchanges
A Relaxed-Criteria Bargaining Protocol for Grid Resource Management
CCGRID '06 Proceedings of the Sixth IEEE International Symposium on Cluster Computing and the Grid
Relaxed-criteria G-negotiation for Grid resource co-allocation
ACM SIGecom Exchanges
Agent behaviors in virtual negotiation environments
IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews
A weighted sum genetic algorithm to support multiple-party multiple-objective negotiations
IEEE Transactions on Evolutionary Computation
Agents that react to changing market situations
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
Equilibria, Prudent Compromises, and the "Waiting" Game
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
Grid Commerce, Market-Driven G-Negotiation, and Grid Resource Management
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
Recourse-based facility-location problems in hybrid uncertain environment
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics - Special issue on gait analysis
Automated negotiation provides a means for resolving differences among interacting agents. For negotiation with complete information, this paper gives mathematical proofs that an agent's optimal strategy can be computed from its opponent's reserve price (RP) and deadline. Building on this result, it uses the synergy of Bayesian learning (BL) and a genetic algorithm (GA) to determine an agent's optimal strategy in negotiation with incomplete information. The proposed method, BLGAN, adopts: 1) BL and a deadline-estimation process for estimating an opponent's RP and deadline and 2) a GA for generating a proposal at each negotiation round. Learning the opponent's RP and deadline lets the GA in BLGAN reduce the size of its search space (SP) by adaptively focusing its search on a specific region of the space of all possible proposals: SP is dynamically defined as a region around the proposal P that the agent's optimal strategy generates at each round from its estimates of the opponent's RP and deadline. The GA in BLGAN is therefore more likely to generate proposals close to the one the optimal strategy would produce, and by using the GA to search around a proposal generated by its current strategy, an agent in BLGAN compensates for possible errors in estimating its opponent's RP and deadline. Empirical results show that agents adopting BLGAN reached agreements successfully and achieved: 1) higher utilities and better combined negotiation outcomes (CNOs) than agents that adopt only a GA to generate their proposals, 2) higher utilities than agents that adopt BL to learn only the RP, and 3) higher utilities and better CNOs than agents that do not learn their opponents' RPs and deadlines.
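The two ingredients the abstract describes — a Bayesian update over the opponent's reserve price and a GA whose search is restricted to a window around the proposal generated by the agent's current strategy — can be illustrated with a minimal Python sketch. Everything here is an assumption for illustration: the concession formula, the likelihood model, and all parameter names are toy choices, not the paper's actual equations.

```python
import random

def time_dependent_proposal(t, deadline, initial_price, reserve_price, beta=1.0):
    # Hypothetical time-dependent concession strategy (illustrative, not the
    # paper's formula): concede from initial_price toward reserve_price as
    # t approaches the estimated deadline.
    frac = min(t / deadline, 1.0) ** beta
    return initial_price + frac * (reserve_price - initial_price)

def bayesian_rp_update(prior, observed_offer):
    # Toy Bayesian update over candidate opponent RPs. Likelihood assumption
    # (ours, not the paper's): an offer is more probable under hypotheses
    # whose RP lies close to the observed offer.
    posterior = {}
    for rp, p in prior.items():
        likelihood = 1.0 / (1.0 + abs(observed_offer - rp))
        posterior[rp] = p * likelihood
    z = sum(posterior.values())
    return {rp: p / z for rp, p in posterior.items()}

def ga_search_around(p_center, utility, width=2.0, pop=20, gens=30, seed=0):
    # GA confined to a region around the strategy's proposal P, mirroring how
    # BLGAN shrinks its search space. Simple elitist, mutation-only GA over
    # one-dimensional price proposals.
    rng = random.Random(seed)
    population = [p_center + rng.uniform(-width, width) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=utility, reverse=True)
        survivors = population[: pop // 2]          # keep the fitter half
        population = survivors + [
            s + rng.gauss(0, width / 4) for s in survivors  # mutate survivors
        ]
    return max(population, key=utility)
```

In a full negotiation loop, each round would update the RP posterior from the opponent's latest offer, recompute P from the current RP/deadline estimates, and then run the GA around P — so GA search both exploits the learned estimates and hedges against estimation error.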