In this paper, a novel supervised batch learning algorithm for the Random Neural Network (RNN) is proposed. The RNN equations associated with training are deliberately approximated so as to obtain a linear Nonnegative Least Squares (NNLS) problem that is strictly convex and can therefore be solved to optimality. Following a review of selected NNLS algorithms, a simple and efficient approach, identified as capable of handling large-scale NNLS problems, is adopted. The proposed algorithm is applied to a combinatorial optimization problem arising in disaster management, where it is shown to outperform the standard gradient descent algorithm for the RNN.
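The core subproblem described above, minimizing ||Ax - b||² subject to x ≥ 0, can be illustrated with a minimal sketch. The abstract does not specify which NNLS solver the authors adopt, so the projected-gradient routine below is an assumption for illustration only; the matrix `A` and vector `b` stand in for the linearized training equations, not the actual RNN quantities.

```python
import numpy as np

def nnls_projected_gradient(A, b, steps=20000):
    """Solve min_{x >= 0} ||A x - b||^2 by projected gradient descent.

    A sketch of a generic NNLS solver, not the authors' specific method.
    The step size 1/L uses the Lipschitz constant of the gradient,
    L = sigma_max(A)^2, so each iteration is a descent step.
    """
    L = np.linalg.norm(A, 2) ** 2          # spectral norm squared
    AtA, Atb = A.T @ A, A.T @ b            # precompute normal-equation terms
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = AtA @ x - Atb               # gradient of the quadratic
        x = np.maximum(x - grad / L, 0.0)  # gradient step, then project onto x >= 0
    return x

# Hypothetical example: recover a known nonnegative solution.
rng = np.random.default_rng(0)
A = rng.random((30, 5)) + 0.1              # full column rank => strictly convex
x_true = np.array([0.5, 0.0, 1.2, 0.0, 0.3])
b = A @ x_true
x = nnls_projected_gradient(A, b)
```

Strict convexity (here guaranteed by `A` having full column rank) is what makes the global optimum unique and reachable by such first-order methods, which is the property the abstract exploits.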