Lag-Dependent Regularization for MLPs Applied to Financial Time Series Forecasting Tasks

  • Authors: Andrew Skabar
  • Affiliation: Department of Computer Science and Computer Engineering, La Trobe University, Victoria, Australia 3086
  • Venue: ICCS 2009: Proceedings of the 9th International Conference on Computational Science
  • Year: 2009


Abstract

The application of multilayer perceptrons to forecasting the future value of a time series from its past (or lagged) values usually requires very careful selection of the number of lags to use as inputs, and this number must usually be determined empirically. This paper proposes a regularization technique under which the influence of a lag on the forecast value decreases exponentially with the lag, consistent with the intuitive notion that recent values should have more influence than less recent values in predicting future values. This means that, in principle, an infinite number of lagged inputs could be used. Empirical results show that the regularization technique yields superior out-of-sample performance compared with approaches that use a fixed number of inputs without lag-dependent regularization.
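The abstract does not give the paper's exact penalty formulation, but the idea of lag-dependent regularization can be sketched as an L2 weight penalty whose coefficient grows exponentially with the lag index of the input, so that weights attached to older lags are shrunk harder during training. The function name, the hyperparameters `lam` and `decay`, and the penalty-only gradient-descent demo below are all illustrative assumptions, not the author's implementation:

```python
import numpy as np

def lag_dependent_penalty(W_in, lam=1e-3, decay=2.0):
    """Lag-dependent L2 penalty on the input-to-hidden weights of an MLP.

    Row k of W_in holds the weights fanning out from the input for lag k
    (k = 0 is the most recent value). The penalty coefficient for lag k is
    lam * decay**k, so older lags are penalized exponentially more and their
    effective influence on the forecast decays with the lag. `lam` and
    `decay` are illustrative hyperparameters, not values from the paper.
    Returns the penalty value and its gradient with respect to W_in.
    """
    n_lags = W_in.shape[0]
    coeffs = lam * decay ** np.arange(n_lags)      # lambda_k = lam * decay^k
    penalty = np.sum(coeffs[:, None] * W_in ** 2)  # sum_k lambda_k * ||w_k||^2
    grad = 2.0 * coeffs[:, None] * W_in            # d(penalty) / d(W_in)
    return penalty, grad

# Demo: applying only the penalty gradient (no data term) shows the shrinkage
# profile the penalty induces -- older lags shrink faster than recent ones.
rng = np.random.default_rng(0)
W = rng.normal(size=(5, 8))                        # 5 lags x 8 hidden units
norms_before = np.linalg.norm(W, axis=1)
for _ in range(200):
    penalty, grad = lag_dependent_penalty(W)
    W -= 0.1 * grad                                # penalty-only updates
ratios = np.linalg.norm(W, axis=1) / norms_before  # per-lag shrinkage factors
```

In full training this gradient would simply be added to the gradient of the forecasting loss; because the per-lag coefficients grow geometrically, weights on distant lags are driven toward zero unless the data strongly supports them, which is what makes a very long (in principle unbounded) lag window feasible.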