We adopt the Markov chain framework to model bilateral negotiations among agents in dynamic environments and use Bayesian learning to enable them to learn an optimal strategy in incomplete-information settings. Specifically, an agent learns the optimal strategy to play against an opponent whose strategy varies with time, assuming no prior information about the opponent's negotiation parameters. In so doing, we present a new framework for adaptive negotiation in such non-stationary environments and develop a novel learning algorithm, which is guaranteed to converge, that an agent can use to negotiate optimally over time. We have implemented our algorithm and shown that it converges quickly in a wide range of cases.
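To illustrate the Bayesian-learning idea in the abstract, the sketch below shows how an agent might update a belief over candidate opponent reserve prices from observed offers. This is a minimal illustration, not the paper's algorithm: the linear concession model, the Gaussian observation noise, and the specific hypothesis set are all assumptions made for the example.

```python
import math

def bayesian_update(prior, hypotheses, observed_offer, t, noise=0.05):
    """Update the belief over candidate opponent reserve prices after
    observing one offer at normalized time t in [0, 1].

    Assumes (for illustration only) that the opponent concedes linearly
    from 1.0 toward its reserve price, and that observed offers deviate
    from the predicted offer with Gaussian noise.
    """
    posterior = []
    for h, p in zip(hypotheses, prior):
        predicted = 1.0 - (1.0 - h) * t  # hypothetical linear concession toward reserve h
        likelihood = math.exp(-((observed_offer - predicted) ** 2) / (2 * noise ** 2))
        posterior.append(p * likelihood)
    total = sum(posterior)
    return [p / total for p in posterior]

# Hypothetical scenario: three candidate reserve prices, uniform prior,
# and an opponent whose true reserve price is 0.4.
hypotheses = [0.2, 0.4, 0.6]
belief = [1.0 / 3] * 3
true_reserve = 0.4
for t in (0.25, 0.5, 0.75):
    offer = 1.0 - (1.0 - true_reserve) * t  # opponent's actual (noise-free) offer
    belief = bayesian_update(belief, hypotheses, offer, t)
```

After a few observed offers, the belief concentrates on the hypothesis matching the opponent's true reserve price, which is the sense in which the learned response adapts over time.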