The method introduced in this paper allows arbitrarily connected neural networks to be trained, so more powerful architectures with connections across layers can be handled efficiently. The method also simplifies training by replacing the traditional forward-and-backward computation with forward-only computation: the information needed for the gradient vector (in first-order algorithms) and for the Jacobian or Hessian matrix (in second-order algorithms) is gathered during the forward pass. Because the algorithm can train these more complex architectures, the same problems can be solved with far fewer neurons. A comparison of computation cost shows that the proposed forward-only computation can be faster than the traditional implementation of the Levenberg-Marquardt algorithm.
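To make the principle concrete, below is a minimal sketch in Python of forward-only Jacobian accumulation combined with a Levenberg-Marquardt step: during a single forward pass through an arbitrarily connected network, the derivative of each neuron's output with respect to every weight is propagated alongside the signals, so the Jacobian is ready without any backward pass. This is an illustrative reconstruction, not the paper's implementation; the tiny bridged XOR network, the weight layout, and the damping schedule are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrarily connected topology: neuron 0 is hidden, neuron 1 is the output.
# Each inner list names the signals feeding a neuron: integers index network
# inputs, "n0" is neuron 0's output. The direct input->output links are the
# cross-layer connections a strictly layered implementation cannot express.
CONNECTIONS = [[0, 1], [0, 1, "n0"]]
OFFSETS = np.cumsum([0] + [len(c) + 1 for c in CONNECTIONS])  # +1 bias each
N_WEIGHTS = int(OFFSETS[-1])

def forward_only(w, x):
    """Forward pass that also accumulates d(output)/dw for every weight."""
    outs, douts = [], []
    for j, sources in enumerate(CONNECTIONS):
        off = int(OFFSETS[j])
        net = w[off + len(sources)]            # bias term
        dnet = np.zeros(N_WEIGHTS)
        dnet[off + len(sources)] = 1.0
        for k, src in enumerate(sources):
            if isinstance(src, int):           # network input: no history
                s, ds = x[src], None
            else:                              # earlier neuron's output
                idx = int(src[1:])
                s, ds = outs[idx], douts[idx]
            net += w[off + k] * s
            dnet[off + k] += s                 # direct dependence on weight
            if ds is not None:                 # chain through earlier neurons
                dnet += w[off + k] * ds
        out = np.tanh(net)
        outs.append(out)
        douts.append((1.0 - out ** 2) * dnet)
    return outs[-1], douts[-1]

def sse(w, X, Y):
    return sum((y - forward_only(w, x)[0]) ** 2 for x, y in zip(X, Y))

# XOR in bipolar coding: solvable here with a single hidden neuron only
# because of the cross-layer (bridged) connections.
X = np.array([[-1.0, -1.0], [-1.0, 1.0], [1.0, -1.0], [1.0, 1.0]])
Y = np.array([-0.9, 0.9, 0.9, -0.9])

w = rng.normal(scale=0.5, size=N_WEIGHTS)
mu = 0.01
for epoch in range(100):
    J = np.zeros((len(X), N_WEIGHTS))          # rows: d(output)/dw per pattern
    e = np.zeros(len(X))                       # residuals y - o
    for p in range(len(X)):
        o, do = forward_only(w, X[p])
        J[p], e[p] = do, Y[p] - o
    if e @ e < 1e-4:
        break
    # Standard LM damping: accept a step only if it reduces the error.
    for _ in range(20):
        dw = np.linalg.solve(J.T @ J + mu * np.eye(N_WEIGHTS), J.T @ e)
        if sse(w + dw, X, Y) < e @ e:
            w, mu = w + dw, max(mu / 10, 1e-12)
            break
        mu *= 10

print("final SSE: %.6f" % sse(w, X, Y))
```

With bipolar targets, this two-neuron bridged network typically drives the sum of squared errors close to zero within a few dozen LM iterations, whereas a conventional layer-by-layer perceptron needs at least two hidden neurons for XOR; that gap is the "fewer neurons" point made in the abstract.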