Threshold recurrent reinforcement learning model for automated trading

  • Authors:
  • Dietmar Maringer, Tikesh Ramtohul

  • Affiliations:
  • Universität Basel, Basel, Switzerland (both authors)

  • Venue:
  • EvoCOMNET'10 Proceedings of the 2010 international conference on Applications of Evolutionary Computation - Volume Part II
  • Year:
  • 2010

Abstract

This paper presents the threshold recurrent reinforcement learning (TRRL) model and describes its application in a simple automated trading system. The TRRL is a regime-switching extension of the recurrent reinforcement learning (RRL) algorithm. The basic RRL model was proposed by Moody and Wu (1997) and used for uncovering trading strategies. We argue that the RRL is not sufficiently equipped to capture the non-linearities and structural breaks present in financial data, and propose the TRRL model as a more suitable algorithm for such environments. This paper gives a detailed description of the TRRL and compares its performance with that of the basic RRL model in a simple automated trading framework using daily data from four well-known European indices. We assume a frictionless setting and use volatility as an indicator variable for switching between regimes. We find that the TRRL produces better trading strategies in all the cases studied, and demonstrate that it is better suited than the standard RRL to finding structure in non-linear financial time series.
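To make the regime-switching idea concrete, the following is a minimal sketch, not the authors' implementation. It assumes the standard RRL trader of Moody and Wu, where the position signal is a tanh of a linear combination of recent return features, a bias, and the previous position, and it assumes the TRRL simply selects between two regime-specific parameter vectors according to whether a volatility indicator exceeds a threshold. All names, dimensions, and numbers below are hypothetical.

```python
import numpy as np

def rrl_signal(theta, features, prev_signal):
    """Basic RRL trader (sketch): position in [-1, 1] from a tanh of
    a linear combination of a bias, return features, and the
    previous position (the recurrent term)."""
    z = np.concatenate(([1.0], features, [prev_signal]))
    return np.tanh(theta @ z)

def trrl_signal(thetas, features, prev_signal, vol, threshold):
    """Threshold RRL (sketch): a volatility indicator picks which
    regime-specific parameter vector drives the RRL trader."""
    regime = 0 if vol <= threshold else 1  # low- vs high-volatility regime
    return rrl_signal(thetas[regime], features, prev_signal)

# Toy usage with made-up parameters: 4 return features,
# so each theta has 4 + bias + recurrent term = 6 entries.
rng = np.random.default_rng(0)
thetas = [rng.standard_normal(6), rng.standard_normal(6)]
f = trrl_signal(thetas, rng.standard_normal(4),
                prev_signal=0.0, vol=0.02, threshold=0.015)
print(-1.0 <= f <= 1.0)  # signal stays inside the tanh range
```

In the paper's frictionless setting, such a signal would be held as the position for the next period; the actual TRRL also trains the parameters by gradient ascent on a trading performance measure, which this sketch omits.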