When designing neural networks for hard classification problems, researchers face the non-trivial problem of choosing an appropriate network size. Optimizing the size of a neural network to obtain high classification accuracy on a dataset is a recognized hard problem in the literature. Existing studies provide theoretical upper bounds on neural network size that are unrealistic to implement. Alternatively, optimizing the network size empirically may require a large number of experiments, which, given the considerable number of free parameters involved, can become a demanding task in both time and effort. Hard classification problems usually involve large datasets. Such datasets derive from collections of real-world data, for example multimedia content, and are typically rich both in training samples and in the features describing each sample. Working with neural networks on hard classification datasets makes the task of optimizing the network size even harder. This work presents a mathematical formula for calculating a priori the size of a neural network that achieves a high classification accuracy rate. The formula estimates the network size based only on the number of available training samples, yielding network sizes that are realistic to implement. Using this formula on hard classification datasets fixes the size of an accurate neural network in advance and allows researchers to concentrate on other aspects of their experiments. This approach shifts the focus to the number of data available for training the network, which is a new perspective in neural network theory; the characteristics of this perspective for designing neural networks to tackle hard classification problems are discussed in this article.
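The abstract does not reproduce the sizing formula itself, so the snippet below is only a minimal sketch of how such an a priori rule would be used in practice. The function name estimate_hidden_units and the square-root-of-samples heuristic inside it are illustrative assumptions, not the formula proposed in this paper; the point is the workflow: fix the architecture once from the sample count, before any empirical tuning.

```python
import math

def estimate_hidden_units(num_training_samples: int) -> int:
    """Hypothetical a priori size estimator.

    Uses a sqrt-of-sample-count heuristic as a stand-in; the
    actual formula from the paper is not given in the abstract,
    so this is an illustrative assumption, not the authors' rule.
    """
    return max(1, round(math.sqrt(num_training_samples)))

# Fix the network size once, from the data alone:
m = 50_000  # number of available training samples
hidden = estimate_hidden_units(m)
print(f"{m} training samples -> {hidden} hidden units")
```

Whatever the concrete estimator, the design choice described in the abstract is the same: the network size is a deterministic function of the number of training samples, so it is decided before experimentation rather than searched for empirically.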