Convergent activation dynamics in continuous time networks. Neural Networks.
Learning internal representations by error propagation. In: Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1.
The "Moving Targets" Training Algorithm. In: Proceedings of the EURASIP Workshop 1990 on Neural Networks.
Sequence Learning: Paradigms, Algorithms, and Applications.
MIGA, A Software Tool for Nonlinear System Modelling with Modular Neural Networks. Applied Intelligence.
Until recently, time-related problems in artificial intelligence were considered difficult to tackle, and the time element was often eliminated from the core problem. Only in the last decades have researchers (Decortis & Cacciabue, [4]; Klopf & Morgan, [7]) begun to explore the importance of time dependencies in artificial intelligence systems. Two different methods, 'time windows' (or 'time buffers') and 'dynamic systems', were tried and refined in classic artificial intelligence applications such as expert systems (Malkoff, [9]).

The next step was to apply these two methods to artificial neural network algorithms. Initially, such algorithms (like back-propagation) used a 'time window' approach (Levin et al., [8]; Chakraborty et al., [3]), but more recently dynamic network algorithms have been developed (Hirsch, [6]; Reiss & Taylor, [10]; Schmidhuber, [13]; Williams & Zipser, [14]).

We explain the advantages of these dynamic algorithms and the problems that arise from their computational requirements. We introduce a method for lowering this requirement by splitting a temporal task into a (smaller) temporal part and a static, non-temporal part. By doing so we obtain the advantages of both methods: the inherent handling of unknown time dependencies by the dynamic neural network and the low computational effort of the static neural network. We demonstrate this approach on a simple diagnosis problem.
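For concreteness, here is a minimal sketch of the 'time window' idea: the temporal signal is sliced into overlapping fixed-length windows, each of which becomes one static input pattern for an ordinary feed-forward network trained with back-propagation. The function name, window size, and signal values below are illustrative, not taken from the paper.

```python
import numpy as np

def make_time_windows(sequence, window_size):
    """Slice a 1-D signal into overlapping fixed-length windows.

    Each window becomes one static input pattern, so an ordinary
    feed-forward network can be trained on temporal data without
    any recurrent connections.
    """
    sequence = np.asarray(sequence, dtype=float)
    n_windows = len(sequence) - window_size + 1
    return np.stack([sequence[i:i + window_size] for i in range(n_windows)])

# Example: an 8-step signal viewed through a 3-step time window.
signal = [0.1, 0.4, 0.9, 0.2, 0.5, 0.8, 0.3, 0.6]
windows = make_time_windows(signal, window_size=3)
print(windows.shape)  # (6, 3): six static patterns of three time steps each
```

The price of this simplicity is that the window length must be chosen in advance, so dependencies longer than the window are invisible to the network.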
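The temporal/static split itself is described here only at a high level, so the following is a hedged sketch of one way such a hybrid could be wired up, assuming an Elman-style recurrent layer as the small dynamic part and a one-hidden-layer feed-forward network as the static part. All layer sizes, weight initializations, and names are invented for illustration and are not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Temporal part: a deliberately small recurrent (Elman-style) layer.
# It compresses an input sequence into a short feature vector, so the
# unknown time dependencies are handled by the dynamic network alone.
N_IN, N_REC, N_HID, N_OUT = 4, 3, 8, 2   # sizes are illustrative only

W_in  = rng.normal(scale=0.5, size=(N_REC, N_IN))
W_rec = rng.normal(scale=0.5, size=(N_REC, N_REC))

def temporal_features(sequence):
    """Run the small recurrent layer over the sequence and return its
    final hidden state as a static feature vector."""
    h = np.zeros(N_REC)
    for x in sequence:                  # x: one time step, shape (N_IN,)
        h = np.tanh(W_in @ x + W_rec @ h)
    return h

# --- Static part: an ordinary feed-forward network on those features,
# trainable with plain back-propagation at low computational cost.
W1 = rng.normal(scale=0.5, size=(N_HID, N_REC))
W2 = rng.normal(scale=0.5, size=(N_OUT, N_HID))

def diagnose(sequence):
    """Hybrid forward pass: dynamic front end, static back end."""
    h = temporal_features(sequence)
    return W2 @ np.tanh(W1 @ h)         # static network output

# Example: score a 10-step sequence of 4 sensor readings.
sequence = rng.normal(size=(10, N_IN))
print(diagnose(sequence))               # two diagnosis scores
```

The design intent matching the abstract is that only the few recurrent units incur the expensive dynamic-training cost, while the bulk of the mapping is learned by the cheap static network.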