Designing neural networks for tackling hard classification problems
WSEAS Transactions on Systems
Estimating a priori the size of a neural network needed to achieve high classification accuracy is a hard problem. Existing studies provide theoretical upper bounds on network size that are impractical to implement. This work presents a computational study that estimates the size of a neural network using the amount of available training data as the estimation parameter. We also show that network size is problem dependent, and that the number of available training samples alone suffices to determine the size of the network required to reach a high classification rate. Our experiments use a threshold neural network that combines the perceptron algorithm with simulated annealing, evaluated on datasets from the UCI Machine Learning Repository. Based on the experimental results, we propose a formula for estimating the number of perceptrons that must be trained in order to achieve high classification accuracy.
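To make the training procedure concrete: the abstract pairs classical perceptron updates with a simulated-annealing acceptance step. The sketch below is a minimal illustration of that idea for a single threshold perceptron, where each mistake-driven update is accepted under a Boltzmann criterion with a logarithmic cooling schedule. The function name, cooling constant `c`, epoch count, and the specific acceptance rule are assumptions made for illustration, not the authors' exact implementation.

```python
import numpy as np

def train_sa_perceptron(X, y, epochs=200, c=2.0, seed=0):
    """Illustrative sketch: a single threshold perceptron whose
    perceptron updates are accepted via a simulated-annealing rule
    with a logarithmic cooling schedule. X is an (n, d) feature
    matrix; y holds labels in {-1, +1}. Hyperparameters are
    illustrative assumptions, not values from the paper.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Xb = np.hstack([X, np.ones((n, 1))])      # append constant input for the bias
    w = np.zeros(d + 1)                       # weights plus bias term

    def errors(v):
        return np.sum(np.sign(Xb @ v) != y)   # count of misclassified samples

    best_w, best_err = w.copy(), errors(w)
    for k in range(1, epochs + 1):
        T = c / np.log(k + 1)                 # logarithmic cooling schedule
        i = rng.integers(n)                   # pick a random training sample
        if np.sign(Xb[i] @ w) != y[i]:        # classical perceptron step on a mistake
            w_new = w + y[i] * Xb[i]
            delta = errors(w_new) - errors(w)
            # always accept improvements; accept uphill moves with Boltzmann probability
            if delta <= 0 or rng.random() < np.exp(-delta / T):
                w = w_new
        if errors(w) < best_err:              # track the best weight vector seen
            best_w, best_err = w.copy(), errors(w)
    return best_w
```

In the sizing experiments the abstract describes, a network would consist of some number of such trained perceptrons; the formula the paper proposes would supply that number from the size of the available training set.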