In this work, we characterize and contrast the capabilities of the general class of time-delay neural networks (TDNNs) with those of input delay neural networks (IDNNs), the subclass of TDNNs whose delays are limited to the inputs. Both classes of networks can represent the same set of languages: those embodied by the definite memory machines (DMMs), a subclass of finite-state machines. We demonstrate the close affinity between TDNNs and DMM languages by learning a very large DMM (2048 states) from only a few training examples. Even though both architectures can represent the same class of languages, they have distinguishable learning biases. Intuition suggests that general TDNNs, which include delays in the hidden layers, should outperform IDNNs on problems in which the output can be expressed as a function of narrow input windows that repeat in time. Conversely, these general TDNNs should perform poorly when the input windows are wide or there is little repetition. We confirm these hypotheses through a set of simulations and statistical analysis.
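To make the IDNN/DMM connection concrete, the following sketch (an illustration of the concept, not code from the paper) shows how a DMM of order k, whose output depends only on the last k inputs, maps naturally onto the sliding-window input representation that an IDNN with k input delays sees. The parity-of-last-three-bits machine used here is a hypothetical example.

```python
def dmm_output(window):
    """Example DMM of order 3: output is the parity of the last three bits.

    A definite memory machine's output depends only on a bounded suffix of
    the input, so it is fully determined by the current window.
    """
    return sum(window) % 2


def idnn_training_pairs(sequence, k=3):
    """Convert a symbol sequence into (window, target) pairs.

    This is exactly the representation an IDNN with k input delays receives:
    at each time step t, the network sees the window of the last k symbols
    and must produce the DMM's output for that window.
    """
    pairs = []
    for t in range(k - 1, len(sequence)):
        window = tuple(sequence[t - k + 1 : t + 1])
        pairs.append((window, dmm_output(window)))
    return pairs


seq = [1, 0, 1, 1, 0, 0, 1]
pairs = idnn_training_pairs(seq)
# First pair: window (1, 0, 1) with parity target 0
```

Because the target is a function of the window alone, a feed-forward network trained on these pairs needs no recurrent state, which is why IDNNs suffice for DMM languages.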