Almost optimal lower bounds for small depth circuits
STOC '86 Proceedings of the eighteenth annual ACM symposium on Theory of computing
VLSI implementation of a neural network memory with several hundreds of neurons
AIP Conference Proceedings 151 on Neural Networks for Computing
Parallel computation with threshold functions
Journal of Computer and System Sciences - Structure in Complexity Theory Conference, June 2-5, 1986
Analog VLSI and neural systems
Analog VLSI and neural systems
Multilayer feedforward networks are universal approximators
Neural Networks
Limited interconnectivity in synthetic neural systems
Neural Computers
What size net gives valid generalization?
Neural Computation
On the power of neural networks for solving hard problems
Journal of Complexity
Performance of a stochastic learning microchip
Advances in neural information processing systems 1
Learning in threshold networks
COLT '88 Proceedings of the first annual workshop on Computational learning theory
Artificial neural networks: electronic implementations
Artificial neural networks: electronic implementations
Approximation capabilities of multilayer feedforward networks
Neural Networks
VLSI implementations of learning and memory systems: a review
NIPS-3 Proceedings of the 1990 conference on Advances in neural information processing systems 3
SFCS '91 Proceedings of the 32nd annual symposium on Foundations of computer science
IEEE Transactions on Computers - Special issue on artificial neural networks
Depth-Size Tradeoffs for Neural Computation
IEEE Transactions on Computers - Special issue on artificial neural networks
Polynomial threshold functions, AC0 functions, and spectral norms
SIAM Journal on Computing
Reduced order LQG controllers for linear time varying plants
Systems & Control Letters
Size-depth trade-offs for threshold circuits
STOC '93 Proceedings of the twenty-fifth annual ACM symposium on Theory of computing
Simulating threshold circuits by majority circuits
STOC '93 Proceedings of the twenty-fifth annual ACM symposium on Theory of computing
Threshold circuits of bounded depth
Journal of Computer and System Sciences
Explicit Constructions of Depth-2 Majority Circuits for Comparison and Addition
SIAM Journal on Discrete Mathematics
On Optimal Depth Threshold Circuits for Multiplication and Related Problems
SIAM Journal on Discrete Mathematics
Circuit complexity and neural networks
Circuit complexity and neural networks
Optimal depth, very small size circuits for symmetric functions in AC0
Information and Computation
A universal mapping for Kolmogorov's superposition theorem
Neural Networks
Bounds on the Sample Complexity of Bayesian Learning Using Information Theory and the VC Dimension
Machine Learning - Special issue on computational learning theory
On the node complexity of neural networks
Neural Networks
Discrete neural computation: a theoretical foundation
Discrete neural computation: a theoretical foundation
A numerical implementation of Kolmogorov's superpositions
Neural Networks
Learning in neural networks: VLSI implementation strategies
Fuzzy logic and neural network handbook
On the circuit complexity of sigmoid feedforward neural networks
Neural Networks
A numerical implementation of Kolmogorov's superpositions II
Neural Networks
Constant fan-in digital neural networks are VLSI-optimal
MANNA '95 Proceedings of the first international conference on Mathematics of neural networks: models, algorithms and applications
Neuromorphic systems engineering: neural networks in silicon
Neuromorphic systems engineering: neural networks in silicon
Neuromorphic learning VLSI systems: a survey
Neuromorphic systems engineering
Deeper Sparsely Nets can be Optimal
Neural Processing Letters
A Constructive Approach to Calculating Lower Entropy Bounds
Neural Processing Letters
Handbook of Neural Computation
Handbook of Neural Computation
The constraint based decomposition (CBD) training architecture
Neural Networks
Summed Weight Neuron Perturbation: An O(N) Improvement Over Weight Perturbation
Advances in Neural Information Processing Systems 5, [NIPS Conference]
Optimal Depth Neural Networks for Multiplication and Related Problems
Advances in Neural Information Processing Systems 5, [NIPS Conference]
Computing with Almost Optimal Size Neural Networks
Advances in Neural Information Processing Systems 5, [NIPS Conference]
A Fast Stochastic Error-Descent Algorithm for Supervised Learning and Optimization
Advances in Neural Information Processing Systems 5, [NIPS Conference]
On Small Depth Threshold Circuits
SWAT '92 Proceedings of the Third Scandinavian Workshop on Algorithm Theory
Cascade Error Projection: A Learning Algorithm for Hardware Implementation
IWANN '99 Proceedings of the International Work-Conference on Artificial and Natural Neural Networks: Foundations and Tools for Neural Modeling
On the Power of Networks of Majority Functions
IWANN '91 Proceedings of the International Workshop on Artificial Neural Networks
On Lower Bounds for the Depth of Threshold Circuits with Weights from {-1, 0, +1}
Algorithmic Learning for Knowledge-Based Systems, GOSLER Final Report
Some Notes on Threshold Circuits, and Multiplication in Depth 4
FCT '91 Proceedings of the 8th International Symposium on Fundamentals of Computation Theory
On limited fan-in optimal neural networks
SBRN '97 Proceedings of the 4th Brazilian Symposium on Neural Networks (SBRN '97)
On connectionist models
The effects of quantization on multilayer neural networks
IEEE Transactions on Neural Networks
A Comparison Study for a Neural Network Based Embedded Appliance
Neural Nets WIRN10: Proceedings of the 20th Italian Workshop on Neural Nets
Design and evaluation of neural networks for an embedded application
IEA/AIE'10 Proceedings of the 23rd international conference on Industrial engineering and other applications of applied intelligent systems - Volume Part III
An efficient hardware architecture for a neural network activation function generator
ISNN'06 Proceedings of the Third international conference on Advances in Neural Networks - Volume Part III
An optimized discrete neural network in embedded systems for road recognition
Engineering Applications of Artificial Intelligence
A defect-tolerant accelerator for emerging high-performance applications
Proceedings of the 39th Annual International Symposium on Computer Architecture
MICAI'12 Proceedings of the 11th Mexican international conference on Advances in Computational Intelligence - Volume Part II
DianNao: a small-footprint high-throughput accelerator for ubiquitous machine-learning
Proceedings of the 19th international conference on Architectural support for programming languages and operating systems
Information Sciences: an International Journal
This paper analyzes some aspects of the computational power of neural networks that use integer weights in a very restricted range. Limited-range integer weights open the way to efficient VLSI implementations because: (i) a limited weight range translates into reduced storage requirements, and (ii) integer arithmetic can be implemented more efficiently than floating-point arithmetic. The paper concentrates on classification problems and shows that, if the weights are restricted drastically (in both range and precision), the existence of a solution can no longer be taken for granted. The paper presents an existence result relating the difficulty of the problem, characterized by the minimum distance between patterns of different classes, to the weight range necessary to ensure that a solution exists. This result allows one to calculate a weight range for a given category of problems and be confident that a network with integer weights in that range can solve those problems. Worst-case lower bounds are given for the number of entropy bits and the number of weights necessary to solve a given problem. Practical issues, such as the relationship between information entropy bits and storage bits, are also discussed. Because the analysis is worst-case, it tends to overestimate the weight range, the number of bits, and the number of weights. The paper therefore also presents statistical considerations that trade absolute confidence in successful training for values better suited to practical use. Finally, the approach is discussed in the context of the VC complexity.
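As a rough illustration of the kind of question addressed above, the following minimal sketch (Python with NumPy; the function names, the toy data, and the simple scale-and-round quantization are illustrative assumptions, not the paper's construction) computes the minimum distance between patterns of different classes and checks whether a single threshold unit, with its real weights quantized to integers in a given range, still separates the two classes.

# Minimal sketch: integer weight range vs. separability of a toy problem.
# All names and data are illustrative assumptions, not taken from the paper.
import numpy as np

def min_interclass_distance(patterns, labels):
    """Smallest Euclidean distance between patterns of different classes."""
    best = np.inf
    for i in range(len(patterns)):
        for j in range(i + 1, len(patterns)):
            if labels[i] != labels[j]:
                best = min(best, np.linalg.norm(patterns[i] - patterns[j]))
    return best

def quantize_weights(w, weight_range):
    """Scale real weights and round them to integers in [-weight_range, +weight_range]."""
    scale = weight_range / np.max(np.abs(w))
    return np.clip(np.round(w * scale), -weight_range, weight_range).astype(int)

# Toy linearly separable problem solved by one threshold unit with real weights.
X = np.array([[0.0, 0.2], [0.1, 0.9], [1.0, 0.1], [0.9, 1.0]])
y = np.array([0, 0, 1, 1])
w_real, b_real = np.array([2.3, -0.4]), -1.0   # a separating hyperplane

d_min = min_interclass_distance(X, y)
for weight_range in (1, 2, 4, 8):              # candidate integer weight ranges
    w_int = quantize_weights(w_real, weight_range)
    b_int = int(round(b_real * weight_range / np.max(np.abs(w_real))))
    ok = np.all((X @ w_int + b_int >= 0).astype(int) == y)
    print(f"d_min={d_min:.2f}  range=+/-{weight_range}  weights={w_int}  separates={ok}")

On this toy data the quantized unit fails for the narrowest range and succeeds once the range is wide enough, which mirrors, in a purely illustrative way, the paper's point that the required integer weight range grows as the minimum interclass distance shrinks.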