Optimizing the size of a neural network to obtain high classification accuracy on a dataset is a hard problem. Existing studies provide theoretical upper bounds on network size that are unrealistic to implement. Alternatively, optimizing the size empirically may require a large number of experiments, which, given the considerable number of free parameters, can become prohibitively expensive in time and effort. Multimedia datasets are usually large, being rich both in training samples and in the features that describe each sample, so working with neural networks on multimedia data makes the task of optimizing network size even harder. This work presents a mathematical formula for calculating, a priori, the size of a neural network that achieves a high classification accuracy rate. The formula estimates network size from the number of available training samples alone, yielding sizes that are realistic to implement. Applying this formula to multimedia datasets fixes the size of an accurate neural network in advance and allows researchers to concentrate on other aspects of their experiments.
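The abstract does not state the paper's formula. As an illustrative sketch only, the snippet below applies the classic Baum–Haussler rule of thumb (roughly ten training samples per trainable weight) to size the hidden layer of a one-hidden-layer perceptron from the sample count; the function name, the `samples_per_weight` constant, and the weight-counting convention are assumptions, not the paper's method.

```python
def hidden_units_from_samples(num_samples, num_inputs, num_outputs=1,
                              samples_per_weight=10):
    """Estimate hidden-layer size for a one-hidden-layer perceptron.

    Illustrative only: this is the Baum-Haussler rule of thumb
    (~10 training samples per trainable weight), NOT the formula
    proposed in the paper, which the abstract does not state.
    """
    # A fully connected net has num_inputs * h input-to-hidden weights,
    # h * num_outputs hidden-to-output weights, and h hidden biases,
    # i.e. (num_inputs + num_outputs + 1) weights per hidden unit,
    # plus num_outputs output biases.
    weights_per_hidden_unit = num_inputs + num_outputs + 1
    budget = num_samples / samples_per_weight  # weights we can afford to train
    hidden = int((budget - num_outputs) // weights_per_hidden_unit)
    return max(1, hidden)  # always keep at least one hidden unit
```

For example, 10,000 samples with 20 input features and one output give a budget of 1,000 weights and about 45 hidden units; with very few samples the function bottoms out at a single hidden unit rather than zero.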