Neural computing: theory and practice
A matrix method is described that optimizes the weights and biases on the output side of a network with a single hidden layer of neurons, given any fixed set of weights and biases on the input side of the hidden layer. All input patterns are included in a single optimization cycle. A simple iterative minimization procedure is then used to optimize the input-side weights and biases. Many test problems have been solved, confirming the validity of the method. The results suggest that, for a network with a single layer of hidden sigmoidal nodes, the accuracy of a functional representation decreases as the nonlinearity of the target function increases.
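The two-stage scheme the abstract describes, exact least-squares fitting of the output-side weights over all patterns at once combined with a simple iterative search over the input-side parameters, can be sketched as below. This is a hedged reconstruction, not the paper's algorithm: the random-perturbation minimizer stands in for the unspecified iterative procedure, and the function names (output_lstsq, train) and the sine-wave test function are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def output_lstsq(H, Y):
    # Solve for the output-side weights and bias over ALL patterns in one
    # linear least-squares problem (the "single optimization cycle").
    H1 = np.hstack([H, np.ones((len(H), 1))])   # append a bias column
    W, *_ = np.linalg.lstsq(H1, Y, rcond=None)
    return W                                    # (n_hidden + 1, n_outputs)

def loss(X, Y, W_in, b_in):
    # Error of the network when the output layer is re-solved optimally
    # for the given input-side parameters.
    H = sigmoid(X @ W_in + b_in)
    W_out = output_lstsq(H, Y)
    resid = np.hstack([H, np.ones((len(H), 1))]) @ W_out - Y
    return float(np.mean(resid ** 2))

def train(X, Y, n_hidden=8, iters=500, scale=0.1):
    W_in = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))
    b_in = np.zeros(n_hidden)
    best = loss(X, Y, W_in, b_in)
    for _ in range(iters):
        # Simple iterative minimization of the input-side parameters:
        # propose a random perturbation and keep it if the (output-optimal)
        # error decreases. A stand-in for the paper's unspecified procedure.
        dW = rng.normal(scale=scale, size=W_in.shape)
        db = rng.normal(scale=scale, size=b_in.shape)
        trial = loss(X, Y, W_in + dW, b_in + db)
        if trial < best:
            W_in, b_in, best = W_in + dW, b_in + db, trial
    W_out = output_lstsq(sigmoid(X @ W_in + b_in), Y)
    return W_in, b_in, W_out, best

# Example: fit a 1-D function with a single sigmoidal hidden layer.
X = np.linspace(-1, 1, 100)[:, None]
Y = np.sin(3 * X)
W_in, b_in, W_out, err = train(X, Y)
print(f"mean-squared error: {err:.2e}")
```

Raising the frequency of the target (e.g. np.sin(6 * X)) makes the function more nonlinear and, consistent with the abstract's observation, typically increases the residual error for a fixed hidden-layer size.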