Most artificial neural networks (ANNs) have a fixed topology during learning and, as a result, often suffer from a number of shortcomings. Variants of ANNs that use dynamic topologies have shown the ability to overcome many of these problems. This paper introduces location-independent transformations (LITs) as a general strategy for efficiently implementing distributed feedforward networks that use dynamic topologies (dynamic ANNs) in parallel hardware. A LIT creates a set of location-independent nodes, where each node computes its part of the network output independently of the other nodes, using only local information. This type of transformation allows efficient support for adding and deleting nodes dynamically during learning. In particular, this paper presents a LIT that supports both the standard (static) multilayer backpropagation network and backpropagation with dynamic extensions. The complexity of both the learning and execution algorithms is O(q(N + log M)) for a single pattern, where q is the number of weight layers in the original network, N is the number of nodes in the widest node layer in the original network, and M is the number of nodes in the transformed network (which is linear in the number of hidden nodes in the original network). This paper extends previous work on 2-weight-layer backpropagation networks.
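To make the location-independence property concrete, the sketch below shows one plausible reading of the idea in plain Python: each hidden node stores its own input and output weights locally and computes its contribution to the network output on its own, so nodes can be added or deleted without global rewiring. The class names (LINode, LITNetwork) and the serial summation are illustrative assumptions, not the paper's actual transformation, which targets parallel hardware.

```python
import math
import random

class LINode:
    """One location-independent hidden node: computes its contribution
    to every network output using only its locally stored weights."""
    def __init__(self, n_inputs, n_outputs):
        self.w_in = [random.uniform(-0.1, 0.1) for _ in range(n_inputs)]
        self.w_out = [random.uniform(-0.1, 0.1) for _ in range(n_outputs)]

    def contribution(self, x):
        # Local activation: sigmoid of this node's own weighted input sum.
        a = 1.0 / (1.0 + math.exp(-sum(w * xi for w, xi in zip(self.w_in, x))))
        # Scale by this node's own output weights; no other node is consulted.
        return [w * a for w in self.w_out]

class LITNetwork:
    """The network output is the sum of per-node contributions, so the
    topology can change during learning without restructuring other nodes."""
    def __init__(self, n_inputs, n_outputs):
        self.n_inputs, self.n_outputs = n_inputs, n_outputs
        self.nodes = []

    def add_node(self):
        self.nodes.append(LINode(self.n_inputs, self.n_outputs))

    def remove_node(self, i):
        del self.nodes[i]

    def forward(self, x):
        y = [0.0] * self.n_outputs
        for node in self.nodes:
            for j, c in enumerate(node.contribution(x)):
                y[j] += c
        return y

net = LITNetwork(n_inputs=3, n_outputs=2)
net.add_node(); net.add_node()
print(net.forward([0.5, -1.0, 0.25]))
net.remove_node(0)  # a topology change; no global rewiring is needed
print(net.forward([0.5, -1.0, 0.25]))
```

In a parallel realization, each LINode would map to its own processing element, and the per-output sums in forward would be combined with a tree reduction, which is where an O(log M) term of the kind quoted in the abstract would arise.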