This paper presents a constructive approach to estimating the size of a neural network needed to solve a given classification problem. The results are derived using an information entropy approach in the context of limited precision integer weights. Such weights are particularly well suited to hardware implementations, since they occupy limited area and the computations performed with them can be implemented efficiently in hardware. From this information entropy perspective, the paper calculates lower bounds on the number of bits needed to solve a given classification problem. These bounds are obtained by approximating the classification hypervolumes with the volumes of several regular (i.e., highly symmetric) n-dimensional bodies. The bounds allow the user to choose a network size such that: (i) the given classification problem can be solved, and (ii) the network architecture is not oversized. Because all the considerations assume the restrictive case of limited precision integer weights, they can be applied directly when designing VLSI implementations of neural networks.
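As a rough illustration of the entropy-based reasoning (not the paper's actual bounds, which rest on the hypervolume approximation described above), the following Python sketch computes the Shannon entropy of a labeling and a naive lower bound on the number of b-bit integer weights, under the simplifying assumption that the network must be able to encode at least the information content of the training labels. The function names `entropy_bits` and `min_num_weights` are illustrative, not taken from the paper.

```python
import math

def entropy_bits(class_counts):
    """Shannon entropy (bits per example) of the class label distribution."""
    total = sum(class_counts)
    h = 0.0
    for c in class_counts:
        if c > 0:
            p = c / total
            h -= p * math.log2(p)
    return h

def min_weight_bits(num_examples, class_counts):
    """Naive lower bound on total weight bits: assume the network must
    'store' at least the information content of the labeling,
    i.e. num_examples * H(labels) bits. (Illustrative assumption only;
    the paper derives tighter, geometry-based bounds.)"""
    return math.ceil(num_examples * entropy_bits(class_counts))

def min_num_weights(num_examples, class_counts, bits_per_weight):
    """Lower bound on the number of limited-precision integer weights,
    each carrying bits_per_weight bits."""
    return math.ceil(min_weight_bits(num_examples, class_counts) / bits_per_weight)

if __name__ == "__main__":
    m = 1000                # number of training examples
    counts = [500, 500]     # balanced binary labels -> H = 1 bit/example
    b = 4                   # limited precision: 4-bit integer weights
    print(min_weight_bits(m, counts))      # 1000 bits of labeling information
    print(min_num_weights(m, counts, b))   # at least 250 four-bit weights
```

The sketch makes concrete why lower precision (smaller b) forces more weights for the same problem, which is the trade-off the paper's bounds quantify for VLSI-oriented designs.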