Modeling word perception using the Elman network
Neurocomputing
ICONIP'06: Proceedings of the 13th International Conference on Neural Information Processing - Volume Part I
This paper presents a novel training method for the Elman network that encodes the words of literary works. The network has previously been applied to small sets of simple artificial sentences with varying degrees of success; this paper shows how to apply it to real-world texts. The method learns both the word codes and the network weights. Each trained code is a distributed representation of its word, and the training error can be drastically reduced by iteratively re-encoding the representations. Several distinct findings and results observed during training are reported.
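For readers unfamiliar with the architecture, the sketch below shows the forward pass of an Elman (simple recurrent) network, in which the previous hidden-layer activation is copied into context units and fed back alongside the current input. This is a minimal illustration only: the class name, layer sizes, and toy one-hot "word" inputs are assumptions for the example, and the paper's actual procedure for jointly training word codes and weights is not reproduced here.

```python
import numpy as np

class ElmanNetwork:
    """Minimal Elman (simple recurrent) network sketch.

    The hidden layer's activation from the previous time step is
    stored in context units and combined with the current input.
    """

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W_xh = rng.normal(0.0, 0.1, (n_hidden, n_in))      # input -> hidden
        self.W_ch = rng.normal(0.0, 0.1, (n_hidden, n_hidden))  # context -> hidden
        self.W_hy = rng.normal(0.0, 0.1, (n_out, n_hidden))     # hidden -> output
        self.context = np.zeros(n_hidden)                       # previous hidden state

    @staticmethod
    def _sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def step(self, x):
        # Hidden state depends on the current input and the context units.
        h = self._sigmoid(self.W_xh @ x + self.W_ch @ self.context)
        y = self._sigmoid(self.W_hy @ h)
        self.context = h  # copy hidden activations into context for the next step
        return y

# Feed a short sequence of toy one-hot "word" codes through the network.
net = ElmanNetwork(n_in=8, n_hidden=16, n_out=8)
seq = [np.eye(8)[i % 8] for i in range(5)]
outputs = [net.step(x) for x in seq]
```

Because the hidden state is carried across `step` calls, each output depends on the whole input sequence so far, which is what lets the network capture sequential word context.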