We derive a smoothing regularizer for dynamic network models by requiring robustness of prediction performance to perturbations of the training data. The regularizer can be viewed as a generalization of the first-order Tikhonov stabilizer to dynamic models. For two-layer networks with recurrent connections and parameter set Φ = {U, V, W}, the training criterion with the regularizer is D(Φ) = (1/N) Σ_{t=1}^{N} ||Z(t) − Ŷ(t; I(t), Φ)||² + λ ρ_τ(Φ), where Z(t) are the targets, Ŷ(t; I(t), Φ) are the network outputs, I(t) = {X(s), s = 1, 2, ..., t} represents the current and all historical input information, N is the size of the training data set, ρ_τ(Φ) is the regularizer, and λ is a regularization parameter. The closed-form expression for the regularizer for time-lagged recurrent networks is ρ_τ(Φ) = γ ||U|| ||W|| / (1 − γ ||V||), where || · || is the Euclidean matrix norm and γ is a factor that depends upon the maximal value of the first derivatives of the internal unit activations f(). Simplifications of the regularizer are obtained for simultaneous recurrent nets (τ → 0), two-layer feedforward nets, and one-layer linear nets. We have successfully tested this regularizer in a number of case studies and found that it performs better than standard quadratic weight decay.
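The closed-form expression lends itself to a short numerical sketch. The Python/NumPy code below is a minimal illustration under stated assumptions, not the paper's implementation: it reads the Euclidean matrix norm as the spectral (2-)norm (the Frobenius norm is another plausible reading), takes γ as the maximal value of |f′| for the hidden activation (1 for tanh, 0.25 for the logistic sigmoid), and the function names smoothing_regularizer and regularized_loss, the default λ, and the way the penalty is combined with the squared prediction error are all hypothetical choices for illustration.

```python
import numpy as np

def smoothing_regularizer(U, V, W, gamma=1.0):
    # Sketch of rho_tau(Phi) = gamma * ||U|| * ||W|| / (1 - gamma * ||V||).
    # Assumptions: ||.|| is taken as the spectral (2-)norm; gamma is the
    # maximal first derivative of the hidden activation f (1.0 for tanh).
    norm_U = np.linalg.norm(U, 2)
    norm_V = np.linalg.norm(V, 2)
    norm_W = np.linalg.norm(W, 2)
    if gamma * norm_V >= 1.0:
        # The closed form is only well defined when gamma * ||V|| < 1.
        raise ValueError("gamma * ||V|| must be < 1")
    return gamma * norm_U * norm_W / (1.0 - gamma * norm_V)

def regularized_loss(Z, Y_hat, U, V, W, lam=1e-3, gamma=1.0):
    # Hypothetical training criterion: mean squared prediction error over
    # the N time steps plus lambda times the smoothing regularizer.
    mse = np.mean(np.sum((Z - Y_hat) ** 2, axis=-1))
    return mse + lam * smoothing_regularizer(U, V, W, gamma)
```

Under these assumptions, setting V = 0 in the same function gives the two-layer feedforward special case of the expression, and the criterion reduces to the unregularized mean squared error as λ → 0.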