Biological Cybernetics
An evolutionary method of training a neural network is described and illustrated. The usual training methods must be changed fundamentally because the algorithm has to be supplied with discrete values of the variables (the weights). To give the algorithm freedom to select weights from an unlimited range of values, mutation of the integer variables produces a progressive 'shift' of the centre of the range of positive/negative values offered for selection. At each iteration the range of integer values offered to the algorithm is randomly selected. The variables are mutated in shuffled order; each successful mutation is retained by the algorithm, while unsuccessful mutations are rejected. As the error approaches the target level, the rate of progress is controlled by progressively adapting the numerical range within which the mutation shifts are applied. The method is used to train illustrative networks to predict values of a simple trigonometric function, to provide an approximate analysis of reinforced concrete deep beams, and to predict overall buckling loads for rectangular hollow steel sections. The results obtained using the new algorithm are compared with those from conventional back-propagation (BP) training and with 'exact' results.
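The scheme described in the abstract can be sketched as a simple accept/reject evolutionary trainer over integer weights. The network size, the weight scaling factor `SCALE`, the initial shift range, and the range-adaptation schedule below are illustrative assumptions, not the authors' exact settings; only the overall structure (shuffled per-weight integer mutations, keeping successful mutations, rejecting unsuccessful ones, and shrinking the mutation range as the error falls) follows the description.

```python
import math
import random

random.seed(0)

SCALE = 100.0   # assumed: integer weight w is interpreted as w / SCALE
N_HIDDEN = 8    # assumed: small 1-8-1 network for the sin(x) illustration


def forward(weights, x):
    """Forward pass of a 1-N_HIDDEN-1 network with tanh hidden units."""
    w1 = weights[:N_HIDDEN]                # input -> hidden weights
    b1 = weights[N_HIDDEN:2 * N_HIDDEN]    # hidden biases
    w2 = weights[2 * N_HIDDEN:3 * N_HIDDEN]  # hidden -> output weights
    b2 = weights[3 * N_HIDDEN]             # output bias
    h = [math.tanh((w1[i] * x + b1[i]) / SCALE) for i in range(N_HIDDEN)]
    return sum(w2[i] * h[i] for i in range(N_HIDDEN)) / SCALE + b2 / SCALE


def mse(weights, samples):
    """Mean squared error over (x, target) pairs."""
    return sum((forward(weights, x) - y) ** 2 for x, y in samples) / len(samples)


def train(samples, iters=1000):
    """(1+1)-style evolutionary training with integer mutation shifts."""
    n = 3 * N_HIDDEN + 1
    weights = [random.randint(-50, 50) for _ in range(n)]
    best = mse(weights, samples)
    shift_range = 50  # assumed initial width of the mutation-shift range
    for _ in range(iters):
        order = list(range(n))
        random.shuffle(order)  # mutate the variables in shuffled order
        for i in order:
            old = weights[i]
            # integer mutation: a random shift within the current range
            weights[i] = old + random.randint(-shift_range, shift_range)
            err = mse(weights, samples)
            if err < best:
                best = err        # successful mutation is retained
            else:
                weights[i] = old  # unsuccessful mutation is rejected
        # crude range adaptation: slowly shrink the shift range
        shift_range = max(1, int(shift_range * 0.999))
    return weights, best
```

As a usage example, training on samples of `sin(x)` over `[0, 2*pi]` steadily reduces the error relative to the all-zero starting network; the adaptation schedule controls how quickly the mutation shifts narrow as the target error is approached.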