Artificial neural networks (ANNs) have proven their efficiency in several applications, including pattern recognition, voice recognition, and classification problems. The training stage is crucial to an ANN's performance, and selecting an architecture suited to a given problem is one of the most important aspects of neural network research: the number of hidden layers and the values of the weights have a large impact on the convergence of the training algorithm. In this paper we propose a mathematical formulation for determining the optimal number of hidden layers and good weight values, and we solve the resulting problem with genetic algorithms. Computational experiments are presented; the numerical results assess the effectiveness of the theoretical results shown in this paper and illustrate the advantages of the new modelling.
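The abstract does not give the authors' formulation, but the general idea of using a genetic algorithm to search jointly over an architecture parameter (here, the number of hidden units) and the weight values can be sketched as follows. Everything below is illustrative: the XOR task, the architecture-size penalty in the fitness function, and all GA parameters (population size, mutation rate, elite fraction) are assumptions, not the paper's method.

```python
import math
import random

random.seed(0)

# Toy supervised task: XOR, a classic test that needs at least one hidden layer.
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(weights, hidden, x):
    # Flat weight layout: for each hidden unit, 2 input weights + 1 bias,
    # then `hidden` hidden-to-output weights + 1 output bias.
    h, idx = [], 0
    for _ in range(hidden):
        h.append(sigmoid(weights[idx] * x[0] + weights[idx + 1] * x[1] + weights[idx + 2]))
        idx += 3
    out = sum(w * v for w, v in zip(weights[idx:idx + hidden], h)) + weights[idx + hidden]
    return sigmoid(out)

def genome_len(hidden):
    return hidden * 3 + hidden + 1

def fitness(genome):
    # Negative squared error, minus a small penalty that favors
    # smaller architectures (an assumed, illustrative trade-off).
    hidden, weights = genome
    err = sum((forward(weights, hidden, x) - y) ** 2 for x, y in XOR)
    return -err - 0.01 * hidden

def random_genome():
    hidden = random.choice([2, 3, 4])  # architecture gene: hidden-unit count
    return (hidden, [random.uniform(-2, 2) for _ in range(genome_len(hidden))])

def mutate(genome, rate=0.2):
    hidden, weights = genome
    return (hidden, [w + random.gauss(0, 0.5) if random.random() < rate else w
                     for w in weights])

def crossover(a, b):
    # Only mate genomes with the same architecture; otherwise copy parent a.
    if a[0] != b[0]:
        return (a[0], list(a[1]))
    cut = random.randrange(1, len(a[1]))
    return (a[0], a[1][:cut] + b[1][cut:])

def evolve(pop_size=60, generations=200):
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 4]          # keep the best quarter
        children = []
        while len(elite) + len(children) < pop_size:
            pa, pb = random.sample(elite, 2)
            children.append(mutate(crossover(pa, pb)))
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
```

Because the GA evolves the weights directly, no gradient computation is needed; the architecture penalty in `fitness` is one simple way to make the search prefer the smallest network that still fits the data.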