There is no known method for determining the optimal topology of a multi-layer neural network for a given problem. Usually the designer selects a topology for the network and then trains it. Since determining the optimal topology of a neural network belongs to the class of NP-hard problems, most existing algorithms for topology determination are approximate. These algorithms can be classified into four main groups: pruning algorithms, constructive algorithms, hybrid algorithms, and evolutionary algorithms. They can produce near-optimal solutions, but most of them use a hill-climbing method and may get stuck in local minima. In this article, we first introduce a learning automaton and study its behaviour, and then present an algorithm based on the proposed learning automaton, called the survival algorithm, for determining the number of hidden units of three-layer neural networks. The survival algorithm uses learning automata as a global search method to increase the probability of obtaining the optimal topology. The algorithm treats the optimization of neural network topology as object partitioning rather than as searching or parameter optimization, as in existing algorithms. In the survival algorithm, training begins with a large network, and a near-optimal topology is then obtained by adding and deleting hidden units. The algorithm has been tested on a number of problems, and simulations show that the networks it generates are near optimal.
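The abstract does not specify the exact automaton or update scheme used by the survival algorithm, but the general idea of a variable-structure learning automaton searching over candidate hidden-unit counts can be sketched as follows. This is a generic linear reward-inaction (L_RI) automaton with a toy reward environment; the candidate counts, the reward step, and the reward probabilities are all illustrative assumptions, not the paper's method.

```python
import random

class LinearRewardInaction:
    """Variable-structure learning automaton with the L_RI update scheme.

    Keeps a probability vector over actions; a rewarded action pulls
    probability mass toward itself, and penalties are ignored.
    """
    def __init__(self, n_actions, reward_step=0.1):
        self.p = [1.0 / n_actions] * n_actions  # action probabilities
        self.a = reward_step                    # learning rate (assumed value)

    def choose(self):
        # Sample an action index according to the current probabilities.
        r, acc = random.random(), 0.0
        for i, pi in enumerate(self.p):
            acc += pi
            if r <= acc:
                return i
        return len(self.p) - 1

    def update(self, action, reward):
        # L_RI: update only on reward; do nothing on penalty.
        if reward:
            for i in range(len(self.p)):
                if i == action:
                    self.p[i] += self.a * (1.0 - self.p[i])
                else:
                    self.p[i] *= (1.0 - self.a)

# Toy environment standing in for "train the network and evaluate it":
# assume (hypothetically) that 8 hidden units is best and is rewarded
# more often than the other candidate sizes.
candidates = [2, 4, 8, 16]
automaton = LinearRewardInaction(len(candidates))
random.seed(0)
for _ in range(2000):
    i = automaton.choose()
    reward = random.random() < (0.9 if candidates[i] == 8 else 0.3)
    automaton.update(i, reward)

# The automaton's probability vector concentrates on one candidate size.
best = candidates[max(range(len(candidates)), key=lambda i: automaton.p[i])]
```

In the actual survival algorithm the reward signal would come from training and evaluating a network with the chosen topology, and units are added and deleted rather than selected from a fixed list; the sketch only illustrates how a learning automaton performs a probabilistic global search instead of hill climbing.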