Mapping a complex temporal problem into a combination of static and dynamic neural networks

  • Authors:
  • Thierry Catfolis

  • Affiliations:
  • -

  • Venue:
  • ACM SIGART Bulletin
  • Year:
  • 1994

Abstract

Until recently, time-related artificial intelligence problems were considered difficult to tackle, and the time element was often eliminated from the core problem. Only in the last few decades have researchers (Decortis & Cacciabue, [4]; Klopf & Morgan, [7]) begun to explore the importance of time dependencies in artificial intelligence systems. Two different methods, 'time windows' (or 'time buffers') and 'dynamic systems', were tried and refined on classic artificial intelligence problems such as expert systems (Malkoff, [9]).

The next step was to apply these two methods to artificial neural network algorithms. Initially, these algorithms (such as back-propagation) used a 'time window' approach (Levin et al., [8]; Chakraborty et al., [3]), but more recently dynamic network algorithms were developed (Hirsch, [6]; Reiss & Taylor, [10]; Schmidhuber, [13]; Williams & Zipser, [14]).

We explain the advantages of these algorithms and the problems that arise from their computational requirements. We introduce a method for lowering this requirement by splitting a temporal task into a (smaller) temporal part and a static, non-temporal part. In doing so we obtain the advantages of both methods: the inherent handling of unknown time dependencies by the dynamic neural network and the low computational effort of the static neural network. We demonstrate this approach on a simple diagnosis problem.
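
The sketch below is not the paper's implementation; it is a minimal illustration of the split the abstract describes, with all layer sizes, weight initialisations, and the toy input sequence chosen as assumptions. A small recurrent (dynamic) network condenses the sequence into a feature vector, and an ordinary feedforward (static) network maps that vector to a diagnosis.

```python
# Minimal sketch (illustrative only) of splitting a temporal task between a
# small dynamic (recurrent) network and a static feedforward network.
# All sizes and data below are assumed, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

# --- dynamic part: a small fully recurrent network ----------------------
# It only has to capture the unknown time dependencies, so it can stay small.
N_IN, N_REC = 3, 4                      # assumed input and recurrent sizes
W_in  = rng.normal(scale=0.5, size=(N_REC, N_IN))
W_rec = rng.normal(scale=0.5, size=(N_REC, N_REC))

def recurrent_features(sequence):
    """Run the sequence through the recurrent units and return the final
    hidden state as a compact temporal feature vector."""
    h = np.zeros(N_REC)
    for x in sequence:
        h = np.tanh(W_in @ x + W_rec @ h)
    return h

# --- static part: an ordinary feedforward network -----------------------
# It maps the temporal features to the diagnosis classes.
N_HID, N_OUT = 8, 2                     # assumed hidden and output sizes
W1 = rng.normal(scale=0.5, size=(N_HID, N_REC))
W2 = rng.normal(scale=0.5, size=(N_OUT, N_HID))

def static_classifier(features):
    hidden = np.tanh(W1 @ features)
    logits = W2 @ hidden
    return np.exp(logits) / np.exp(logits).sum()   # softmax over diagnoses

# --- combined use: the temporal part feeds the static part --------------
sequence = rng.normal(size=(10, N_IN))             # a toy 10-step signal
print(static_classifier(recurrent_features(sequence)))
```

The point of the split, as the abstract argues, is computational: the expensive dynamic training algorithm only has to cover the few recurrent units, while the bulk of the input-to-output mapping is learned by the comparatively cheap static network.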