Examines the function approximation properties of the random neural network model, or GNN. The output of the GNN can be computed from the firing probabilities of selected neurons. We consider a feedforward bipolar GNN (BGNN) model, which has both "positive and negative neurons" in the output layer, and prove that the BGNN is a universal function approximator. Specifically, for any f ∈ C([0,1]^s) and any ε > 0, we show that there exists a feedforward BGNN which approximates f uniformly with error less than ε. We also show that, after an appropriate clamping operation on its output, the feedforward GNN is itself a universal function approximator.
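The firing probabilities mentioned in the abstract can be computed layer by layer in a feedforward GNN. A minimal sketch follows, assuming the standard steady-state equation of the random neural network, q_i = λ⁺(i) / (r(i) + λ⁻(i)), where λ⁺ and λ⁻ accumulate excitatory and inhibitory arrivals from the previous layer; the function name, weights, and rates below are illustrative, not taken from the paper.

```python
def layer_q(q_prev, W_plus, W_minus, Lambda, lam, r):
    """Firing probabilities of one GNN layer given the previous layer's q.

    q_prev  : firing probabilities of the previous layer
    W_plus  : excitatory weights, W_plus[j][i] from neuron j to neuron i
    W_minus : inhibitory weights, W_minus[j][i] from neuron j to neuron i
    Lambda  : external excitatory arrival rates for this layer
    lam     : external inhibitory arrival rates for this layer
    r       : firing rates of this layer's neurons
    """
    q = []
    for i in range(len(r)):
        # Total excitatory and inhibitory arrival rates at neuron i.
        lam_plus = Lambda[i] + sum(q_prev[j] * W_plus[j][i]
                                   for j in range(len(q_prev)))
        lam_minus = lam[i] + sum(q_prev[j] * W_minus[j][i]
                                 for j in range(len(q_prev)))
        # Steady-state firing probability, clamped to at most 1.
        q.append(min(1.0, lam_plus / (r[i] + lam_minus)))
    return q

# Illustrative single-input, single-neuron layer:
# lam_plus = 0.1 + 0.5*1.0 = 0.6, lam_minus = 0.5*0.5 = 0.25,
# so q = 0.6 / (1.0 + 0.25) = 0.48.
q = layer_q([0.5], [[1.0]], [[0.5]], [0.1], [0.0], [1.0])
```

In the BGNN setting of the paper, the output layer contains both positive and negative neurons, and the network output is formed from the firing probabilities of those selected output neurons.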