Economic dynamics of agents in multiple auctions
Proceedings of the fifth international conference on Autonomous agents
Structural leverage and fictitious play in sequential auctions
Eighteenth national conference on Artificial intelligence
SIMPLE - A Multi-Agent System for Simultaneous and Related Auctions
IAT '03: Proceedings of the IEEE/WIC International Conference on Intelligent Agent Technology
Probabilistic Automated Bidding in Multiple Auctions
Electronic Commerce Research
Efficient agents for cliff-edge environments with a large set of decision options
AAMAS '06: Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems
Heuristic Bidding Strategies for Multiple Heterogeneous Auctions
ECAI 2006: Proceedings of the 17th European Conference on Artificial Intelligence, Riva del Garda, Italy, August 29 - September 1, 2006
Modeling human decision making in cliff-edge environments
AAAI'06: Proceedings of the 21st national conference on Artificial intelligence - Volume 1
Efficient bidding strategies for Cliff-Edge problems
Autonomous Agents and Multi-Agent Systems
This paper proposes an efficient agent for competing in simultaneous substitutional Cliff-Edge (SCE) environments, which include simultaneous auctions and multiplayer Ultimatum Games. The agent competes in repeated one-shot interactions, each time against different human opponents, and its performance is evaluated over all the interactions in which it participates. It learns the general pattern of the population's behavior without relying on examples of previous interactions in the environment, whether its own or those of other competitors. Moreover, the agent rapidly adjusts to environments offering a large number of possible decisions at each decision point. We propose a generic approach that competes in different substitutional environments under the same configuration, with no knowledge of the specific rules of each environment. The underlying mechanism of the proposed agent is the Simultaneous Deviated Virtual Reinforcement Learning (SDVRL) algorithm, an extension of an algorithm for non-simultaneous environments. In addition, we propose a heuristic for reducing our agent's computational complexity. Experiments comparing the average payoff of the proposed algorithm with that of alternative algorithms show that the former is significantly superior. In addition, our agent outperforms human competitors executing the same tasks.
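The virtual-learning idea underlying SDVRL can be illustrated with a minimal sketch. Assuming a simple Ultimatum-style Cliff-Edge model with ordered demands and hidden per-opponent acceptance thresholds (an illustrative assumption, not the paper's exact setup, and without the simultaneous or deviated components of full SDVRL), a single observed outcome virtually resolves many options at once: acceptance of one demand implies acceptance of every lower demand, and rejection implies rejection of every higher one.

```python
import random

class VirtualRLBidder:
    """Sketch of virtual reinforcement learning in a Cliff-Edge game.

    The agent demands k in {1..n}; a responder with hidden threshold t
    accepts iff k <= t, giving payoff k, else payoff 0.  One interaction
    virtually updates many options: an accepted demand k implies every
    j <= k would also have been accepted (virtual payoff j), and a
    rejected demand k implies every j >= k would also have been rejected
    (virtual payoff 0).
    """

    def __init__(self, n=100, epsilon=0.1, rng=None):
        self.n = n
        self.epsilon = epsilon            # exploration rate
        self.rng = rng or random.Random()
        self.total = [0.0] * (n + 1)      # cumulative virtual payoff per demand
        self.count = [0] * (n + 1)        # number of virtual observations

    def choose(self):
        """Epsilon-greedy over estimated mean payoffs."""
        if self.rng.random() < self.epsilon or not any(self.count[1:]):
            return self.rng.randint(1, self.n)
        mean = lambda k: self.total[k] / self.count[k] if self.count[k] else 0.0
        return max(range(1, self.n + 1), key=mean)

    def update(self, demand, accepted):
        """Propagate one real outcome to all virtually resolved options."""
        if accepted:
            for j in range(1, demand + 1):       # lower demands also succeed
                self.total[j] += j
                self.count[j] += 1
        else:
            for j in range(demand, self.n + 1):  # higher demands also fail
                self.count[j] += 1               # virtual payoff is 0

# Demo: repeated one-shot interactions, each against a fresh opponent
# whose acceptance threshold is drawn at random (assumed population model).
rng = random.Random(0)
agent = VirtualRLBidder(n=100, epsilon=0.1, rng=rng)
for _ in range(500):
    threshold = rng.randint(55, 75)
    k = agent.choose()
    agent.update(k, accepted=(k <= threshold))
```

Because each interaction updates an entire range of options rather than a single one, the learner converges quickly even when the decision space is large, which is the property the abstract highlights for environments with many decision options.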