NICROSP '96: Proceedings of the 1996 International Workshop on Neural Networks for Identification, Control, Robotics, and Signal/Image Processing
A neural network can be trained by many different procedures, all of which seek the weights that minimize the discrepancy between the targets and the actual outputs of the network. The optimal weights can be found either directly or by iterative techniques; in both cases it is sometimes necessary (or simply useful) to evaluate the pseudo-inverse of the matrix of projections of the input examples into the function space created by the network. The operations this requires can become difficult (and sometimes impossible) when the matrix is very large, so we present a way to subdivide it and to reach the same result with a highly parallel algorithm.
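A minimal sketch of the idea behind the abstract, under some assumptions not stated in it: the output weights of a network are obtained from the pseudo-inverse of the matrix H of hidden-layer projections, and for a tall, full-column-rank H the products H^T H and H^T T decompose into sums over row blocks, so the blocks can be processed independently (and hence in parallel) without ever forming or inverting the full matrix at once. All variable names and sizes here are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((1000, 20))   # hidden-layer projections, one row per input example
T = rng.standard_normal((1000, 3))    # target outputs for each example

# Direct solution: W = pinv(H) @ T, requiring the whole matrix H at once.
W_direct = np.linalg.pinv(H) @ T

# Block-wise solution: for full-column-rank H, pinv(H) = (H^T H)^{-1} H^T,
# and both H^T H and H^T T are sums of per-block contributions, so each
# row block could be handled by a separate processor and the small
# accumulated matrices combined at the end.
block = 100
HtH = np.zeros((H.shape[1], H.shape[1]))
HtT = np.zeros((H.shape[1], T.shape[1]))
for i in range(0, H.shape[0], block):
    Hb, Tb = H[i:i + block], T[i:i + block]
    HtH += Hb.T @ Hb
    HtT += Hb.T @ Tb
W_block = np.linalg.solve(HtH, HtT)

print(np.allclose(W_direct, W_block))
```

The per-block accumulations are independent, which is what makes the subdivision amenable to a highly parallel implementation; only the final 20x20 solve is sequential here.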