Functionally equivalent feedforward neural networks
Neural Computation
It is generally unknown when distinct neural networks with different synaptic weights and thresholds implement identical input-output transformations. Determining the exact conditions under which structurally distinct networks are functionally equivalent may shed light on the theoretical constraints on how diverse neural circuits can develop and be maintained while serving identical functions. Such considerations also impose practical limits on our ability to uniquely infer the structure of underlying neural circuits from stimulus-response measurements. We introduce a biologically inspired mathematical method for determining when the structure of a neural network can be perturbed gradually while preserving functionality. We show that for common three-layer networks with convergent and nondegenerate connection weights, this is possible only when the hidden-unit gains are power functions, exponentials, or logarithmic functions, which are known to approximate the gains seen in some biological neurons. For practical applications, our numerical simulations with finite and noisy data show that continuous confounding of parameters due to network functional equivalence tends to occur, at least approximately, even when the gain function is not one of the three types above, suggesting that our analytical results apply to more general situations and may help identify a common source of parameter variability in neural network modeling.
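To make the notion of continuous functional equivalence concrete, here is a minimal sketch (not from the paper; the network, parameter names, and the value of `c` are illustrative) for the exponential-gain case. With hidden gain g(u) = exp(u), shifting every hidden bias by a constant c while rescaling the output weights by exp(c) yields a continuous one-parameter family of networks with different weights but identical input-output behavior:

```python
import numpy as np

def network(x, W, b, v):
    """Three-layer network with exponential hidden gains:
    y = sum_k v_k * exp(W_k . x + b_k)."""
    return np.exp(x @ W.T + b) @ v

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))   # hidden weights: 3 hidden units, 2 inputs
b = rng.normal(size=3)        # hidden biases
v = rng.normal(size=3)        # output weights

c = 0.7                       # arbitrary continuous perturbation
b2 = b - c                    # shift every hidden bias ...
v2 = v * np.exp(c)            # ... and compensate in the output weights,
                              # since exp(u - c) * exp(c) = exp(u)

x = rng.normal(size=(5, 2))   # a few test inputs
assert np.allclose(network(x, W, b, v), network(x, W, b2, v2))
```

Because c can vary continuously, the two parameterizations are connected by a smooth path of functionally identical networks, which is exactly the kind of confound that limits unique structure inference from stimulus-response data.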