The attractive possibility of applying layerwise block training algorithms to multilayer perceptrons (MLPs), which offers an initial advantage in computational effort, is refined in this article by introducing a sensitivity correction factor into the formulation. This yields a clear performance advantage, which we verify in several applications. The reasons for this advantage are discussed and related to implicit connections with second-order techniques, natural-gradient formulations through the Fisher information matrix, and sample selection. Extensions to recurrent networks and other research lines are suggested at the close of the article.
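Since the abstract compresses the method into a single paragraph, a short sketch may help fix ideas. The Python fragment below is a minimal illustration of a layerwise block scheme for a two-layer MLP: the linear output layer is solved as a local least-squares block, and the hidden-layer update is rescaled by a sensitivity factor. All names (W1, W2, S) and the particular choice of the activation derivative as the sensitivity term are assumptions made for illustration; the article's exact correction factor is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def tanh_prime(a):
    # Derivative of tanh; used below as a stand-in "sensitivity" term.
    return 1.0 - np.tanh(a) ** 2

# Toy regression data: inputs X (N x d) and targets T (N x m).
N, d, h, m = 256, 4, 8, 2
X = rng.standard_normal((N, d))
T = np.stack([np.sin(X[:, 0]), np.cos(X[:, 1])], axis=1)

W1 = 0.1 * rng.standard_normal((d, h))  # hidden-layer weights
lr = 0.05                               # step size for the hidden block

for epoch in range(201):
    A1 = X @ W1          # hidden pre-activations
    Z1 = np.tanh(A1)     # hidden activations

    # Block 1: with the hidden layer frozen, the linear output layer has a
    # closed-form least-squares solution -- the cheap layerwise step that
    # gives the initial computational advantage mentioned in the abstract.
    W2 = np.linalg.lstsq(Z1, T, rcond=None)[0]

    Y = Z1 @ W2
    E = Y - T            # output-layer error

    # Block 2: hidden-layer update. The backpropagated error E @ W2.T is
    # rescaled by a sensitivity correction factor S; here S is the local
    # activation derivative, one plausible reading of "sensitivity",
    # not the article's exact definition.
    S = tanh_prime(A1)
    W1 -= lr * X.T @ ((E @ W2.T) * S) / N

    if epoch % 50 == 0:
        print(f"epoch {epoch:3d}  mse {np.mean(E ** 2):.4f}")
```

In this reading, omitting S would update the hidden block against the raw projected error; the correction weights each hidden unit's contribution by how much its output can actually change, which is one way such a factor can connect to the curvature and natural-gradient interpretations the abstract alludes to.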