A Novel Pruning Algorithm for Optimizing Feedforward Neural Network of Classification Problems
Neural Processing Letters
In this paper we propose two new variants of the backpropagation algorithm. Their common point is that the outputs of the hidden-layer nodes are controlled, with the aim of solving the moving-target problem and the distributed-weights problem. The first algorithm (AlgoRobust) is not very sensitive to noise in the data; the second (AlgoGS) uses the Gram-Schmidt algorithm to determine, in each epoch, which single weight should be updated while all other weights are kept unchanged for that epoch. In this way better generalization can be obtained. Some theoretical explanations are also provided. In addition, simulation comparisons are made between the Gaussian regularizer, optimal brain damage (OBD), and the proposed algorithms. The simulation results confirm that the proposed algorithms perform better than the Gaussian regularizer, and that AlgoRobust outperforms AlgoGS on noisy data, whereas AlgoGS outperforms AlgoRobust on noise-free data; the final network structure obtained by the two new algorithms is comparable to that obtained with OBD.
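The "update one weight per epoch" idea behind AlgoGS can be illustrated with a small sketch. This is a hypothetical simplification, not the paper's actual algorithm: for a plain linear model, the Gram-Schmidt selection step is approximated by scoring each weight's input direction by its squared projection onto the current residual, and the single selected weight then receives an exact coordinate-descent update while the others stay frozen.

```python
# Hypothetical sketch (NOT the paper's exact method): for a linear model
# y_hat = sum_j w[j] * x_j, score each input column by the residual energy it
# explains (a simplified stand-in for Gram-Schmidt-based selection), then
# update only the winning weight this epoch.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def mse(X_cols, y, w):
    """Mean squared error of the linear model defined by weights w."""
    n = len(y)
    y_hat = [sum(w[j] * X_cols[j][i] for j in range(len(w))) for i in range(n)]
    return sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat)) / n

def select_and_update(X_cols, y, w):
    """One 'epoch': pick the most relevant weight and update it alone."""
    n = len(y)
    y_hat = [sum(w[j] * X_cols[j][i] for j in range(len(w))) for i in range(n)]
    r = [y[i] - y_hat[i] for i in range(n)]        # current residual
    best_j, best_score = 0, -1.0
    for j, col in enumerate(X_cols):
        denom = dot(col, col)
        if denom == 0.0:
            continue
        score = dot(r, col) ** 2 / denom           # residual energy explained
        if score > best_score:
            best_j, best_score = j, score
    col = X_cols[best_j]
    # Exact line-search (coordinate-descent) step for the selected weight;
    # all other weights are kept unchanged in this epoch.
    w[best_j] += dot(r, col) / dot(col, col)
    return best_j
```

With a consistent target, repeatedly calling `select_and_update` drives the training error toward zero even though only one weight moves per epoch; the selection alternates between columns as each exact update makes the residual orthogonal to the column just used.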