Many studies have used a genetic algorithm over bit-string genotypes to represent network architectures and thereby improve the performance of back-propagation networks (BPNs). However, the limitations of gradient search techniques on complex nonlinear optimization problems have often produced inconsistent and unpredictable performance. This study focuses on collecting and re-evaluating the weight matrices of a BPN while the genetic algorithm operations run in each generation, in order to optimize those weight matrices. In this way, overfitting, a drawback of BPNs that usually appears during the later stage of training, when training error keeps descending while prediction error ascends, can also be avoided. This study also extends the parameters and topology of the neural network to enlarge the feasible solution space for complex nonlinear problems. The proposed model is compared with previous studies in a Monte Carlo study on in-sample, interpolation, and extrapolation data for six test functions.
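The core idea in the abstract, evolving a BPN's weight matrices with a genetic algorithm while re-evaluating each generation's best weights on held-out data to avoid late-stage overfitting, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the network size, GA operators, test function, and all parameter values are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a 1-4-1 feedforward network whose weight
# matrices are evolved by a genetic algorithm instead of gradient descent.
N_IN, N_HID, N_OUT = 1, 4, 1
GENOME = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT  # flat weight-vector length

def forward(w, x):
    """Decode a flat genome into weight matrices and run the network."""
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = w[i:i + N_OUT]
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

def mse(w, x, y):
    return float(np.mean((forward(w, x) - y) ** 2))

# Toy nonlinear target with separate training and validation samples.
x_tr = rng.uniform(-1, 1, (64, 1)); y_tr = np.sin(3 * x_tr)
x_va = rng.uniform(-1, 1, (32, 1)); y_va = np.sin(3 * x_va)

POP, GENS, MUT = 40, 60, 0.1
pop = rng.normal(0.0, 1.0, (POP, GENOME))

best_w, best_va = pop[0].copy(), np.inf
for gen in range(GENS):
    fit = np.array([mse(w, x_tr, y_tr) for w in pop])
    order = np.argsort(fit)                  # lower training MSE is fitter
    elite = pop[order[:POP // 2]]
    # Re-evaluate this generation's best weights on held-out data and keep
    # the overall validation winner, so late-stage overfitting is avoided.
    va = mse(pop[order[0]], x_va, y_va)
    if va < best_va:
        best_va, best_w = va, pop[order[0]].copy()
    # Uniform crossover between random elite parents, plus Gaussian mutation.
    parents = elite[rng.integers(0, len(elite), (POP, 2))]
    mask = rng.random((POP, GENOME)) < 0.5
    pop = np.where(mask, parents[:, 0], parents[:, 1])
    pop += rng.normal(0.0, MUT, pop.shape)
    pop[0] = best_w                          # elitism: retain the best genome

print(f"best validation MSE: {best_va:.4f}")
```

Because fitness here is training error while the retained solution is chosen by validation error, the GA can continue refining weights without the selected model tracking the descending training error into the overfitting regime.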