Introduction to the theory of neural computation
Learning internal representations by error propagation
Parallel distributed processing: explorations in the microstructure of cognition, vol. 1
The cascade-correlation learning architecture
Advances in neural information processing systems 2
MANNA '95 Proceedings of the first international conference on Mathematics of neural networks: models, algorithms and applications
Forecasting S&P 500 stock index futures with a hybrid AI system
Decision Support Systems
The softening learning procedure
Mathematical and Computer Modelling: An International Journal
ICANN '09 Proceedings of the 19th International Conference on Artificial Neural Networks: Part I
Knowledge-internalization process for neural-networks practitioners
IJCNN'09 Proceedings of the 2009 international joint conference on Neural Networks
The evolution of internal representation
Mathematical and Computer Modelling: An International Journal
The Reasoning Neural Network (RN) adopts a layered feedforward network structure, and its learning algorithm belongs to the weight-and-structure-change category of learning algorithms. In this paper, we first explain that, in a layered feedforward network, the essential characteristic of the mapping between two consecutive layers is the level-adjacent mapping, in which level-adjacent patterns in the previous-layer space are mapped to similar patterns in the latter-layer space. We then explain how RN's learning algorithm avoids the well-known difficulties associated with the back-propagation learning algorithm.
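The level-adjacent mapping property can be illustrated with a minimal sketch of a single feedforward layer: two adjacent patterns in the previous-layer space produce nearby patterns in the latter-layer space. The weights, biases, and input patterns below are arbitrary values chosen purely for illustration, not taken from the paper.

```python
import math

def layer(x, W, b):
    """One feedforward layer: sigmoid(W x + b), mapping the
    previous-layer pattern x to a latter-layer pattern."""
    return [1.0 / (1.0 + math.exp(-(sum(w * xi for w, xi in zip(row, x)) + bj)))
            for row, bj in zip(W, b)]

def dist(u, v):
    """Euclidean distance between two patterns."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Hypothetical 2-input, 2-unit layer (illustrative weights only).
W = [[0.5, -0.3],
     [0.2,  0.8]]
b = [0.1, -0.1]

x1 = [0.40, 0.60]
x2 = [0.42, 0.58]   # a pattern adjacent to x1 in the previous-layer space

y1, y2 = layer(x1, W, b), layer(x2, W, b)
print("input distance: ", dist(x1, x2))
print("output distance:", dist(y1, y2))
```

Because the sigmoid is smooth (its derivative is bounded by 0.25), nearby inputs cannot be pushed far apart by a single layer with moderate weights, which is the sense in which adjacent previous-layer patterns map to similar latter-layer patterns.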