This paper presents a methodology that reflects functions by reflecting the weight matrices of an artificial neural network. One of the major problems with the connectionist approach is that a trained neural network can only associate a fixed set of input-output mappings. We provide a methodology that allows the post-trained net to associate different input-output mappings: reflections of the initial mapping in a horizontal axis, reflections in a vertical axis, and scalings of the initial mapping. The methodology does not train the net on these different mappings; instead, it transforms the weight matrix of the neural network. The paper describes a novel way of utilising sigma-pi neural networks: the weights of a sigma-pi unit are cast in matrix form, and transformations applied to that weight matrix transform the unit's output. To test the methodology, three steps were carried out on a neural network: (1) the network was trained to perform a mapping function f; (2) the weights of the network were transformed; and (3) the network was tested to evaluate whether it performs the reflection in the vertical axis, f_ref-vert(x) = a - f(x), which reflects the function in one dimension. A reflection transformation was applied to the network's weight matrices to obtain the reflection in the vertical axis; note that the network was never trained to perform this reflected mapping. Transforming the weight matrix transforms the function the network computes. This article explains the theory that enables us to perform transformations of sigma-pi networks and to obtain reflections of the output by reflecting the weight matrices. These transforms allow the network to perform related mapping tasks once one mapping task has been learnt. The article explains how each transformation is performed and considers whether a set of 'standard' transformations can indeed be derived.
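A minimal sketch of the three-step test, assuming a single linear-output sigma-pi unit whose output is a weighted sum of input monomials (the example weights, the value of a, the helper names monomials and sigma_pi, and the test input are illustrative assumptions, not the paper's implementation). For the reflection f_ref-vert(x) = a - f(x), the transformation amounts to negating every weight and adding a to the bias term, with no retraining.

import numpy as np
from itertools import combinations

def monomials(x):
    # All products of subsets of the inputs, including the empty product (bias term).
    terms = [1.0]
    for r in range(1, len(x) + 1):
        terms += [np.prod(c) for c in combinations(x, r)]
    return np.array(terms)

def sigma_pi(w, x):
    # Linear-output sigma-pi unit: sum_i w_i * prod_{j in S_i} x_j.
    return w @ monomials(x)

# Step 1 (assumed weights, standing in for a trained mapping f):
w_f = np.array([0.5, 1.2, -0.7, 0.3])   # bias, x1, x2, x1*x2

# Step 2: transform the weights rather than retrain.
a = 2.0
w_ref = -w_f.copy()
w_ref[0] += a

# Step 3: check that the transformed unit computes a - f(x).
x = np.array([0.4, -1.3])
assert np.isclose(sigma_pi(w_ref, x), a - sigma_pi(w_f, x))

Whether the corresponding transformation of a full sigma-pi network's weight matrices takes exactly this form depends on the network's architecture and output activation; the sketch only illustrates the principle that a transformation of the weights, rather than further training, yields the reflected mapping.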