Market Performance of Adaptive Trading Agents in Synchronous Double Auctions

  • Authors:
  • Wei-Tek Hsu; Von-Wun Soo

  • Venue:
  • PRIMA 2001 Proceedings of the 4th Pacific Rim International Workshop on Multi-Agents, Intelligent Agents: Specification, Modeling, and Applications
  • Year:
  • 2001

Abstract

We are concerned with the design of adaptive trading agents that learn bidding strategies in electronic marketplaces, using the synchronous double auction as a simulation testbed. We implemented agents with neural-network-based reinforcement learning, called Q-learning agents (QLAs), to learn bidding strategies in the double auction. To compare the performance of QLAs, we also implemented several kinds of non-adaptive trading agents: simple random bidding agents (SRBAs), gradient-based greedy agents (GBGAs), and truth-telling agents (TTAs). Instead of learning to model the other trading agents, which is computationally intractable, we designed the learning agents to model the market environment as a whole. Our experimental results show that, in terms of global market efficiency, QLAs outperform TTAs and GBGAs but not SRBAs in a market of a homogeneous type of agents. In terms of individual performance, QLAs outperform all three kinds of non-adaptive trading agents when the opponents they face in the marketplace are of a single homogeneous non-adaptive type. In a market of heterogeneous types of agents, however, QLAs outperform only TTAs and GBGAs, not SRBAs.