Optimization: algorithms and consistent approximations
A differential adaptive learning rate method for back-propagation neural networks
NN'09 Proceedings of the 10th WSEAS international conference on Neural networks
KES-AMSTA'10 Proceedings of the 4th KES international conference on Agent and multi-agent systems: technologies and applications, Part II
Smooth function approximation using neural networks
IEEE Transactions on Neural Networks
An algebraic approach for representing multidimensional nonlinear functions by feedforward neural networks is applied to the approximation of smooth batch data containing the inputs and outputs of the hidden neurons and the final output of the network. The training set is related to the adjustable parameters of the network by weight equations, which may be compatible or incompatible. When the nonlinear and linear weight equations are compatible, we obtain their exact solutions. Otherwise, we obtain the unique approximate solution of minimal norm, for which the norm of the difference between the left- and right-hand sides of the equations attains its minimum. This approach also yields a novel adaptive learning rate. By interpreting the multi-agent system as the different kinds of energy driving plant growth, or as the concentrations of different substances in a higher-order chemical reaction, one can predict the height of the plant and the concentrations of the substances, respectively.
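The compatible/incompatible distinction in the abstract matches a standard linear-algebra fact: for linear weight equations of the form A w = b, the Moore-Penrose pseudoinverse gives the exact solution when the system is compatible, and the unique minimal-norm least-squares solution otherwise. A minimal sketch (the matrix `A`, vector `b`, and tolerance are illustrative assumptions, not the paper's data):

```python
import numpy as np

def solve_weight_equations(A, b, tol=1e-10):
    """Solve the linear weight equations A @ w = b.

    If the system is compatible, the pseudoinverse solution is exact;
    otherwise it is the unique minimal-norm solution minimizing
    the residual norm ||A @ w - b||.
    """
    w = np.linalg.pinv(A) @ b          # minimal-norm least-squares solution
    residual = np.linalg.norm(A @ w - b)
    compatible = residual < tol         # residual ~ 0 iff equations are compatible
    return w, residual, compatible

# Compatible system: b lies in the column space of A, so the solution is exact.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = A @ np.array([2.0, -1.0])
w, r, ok = solve_weight_equations(A, b)       # recovers [2, -1], compatible=True

# Incompatible system: no exact solution; we get the least-squares minimizer.
b2 = np.array([1.0, 1.0, 0.0])
w2, r2, ok2 = solve_weight_equations(A, b2)   # compatible=False, residual > 0
```

The same mechanism extends to the nonlinear weight equations in the paper, where the minimal-norm correction drives the adaptive learning rate.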