A Note on Learning Automata Based Schemes for Adaptation of BP Parameters
IDEAL '00 Proceedings of the Second International Conference on Intelligent Data Engineering and Automated Learning, Data Mining, Financial Engineering, and Intelligent Agents
Many types of artificial neural networks require training to "learn" the tasks they will be called upon to perform, which amounts to properly setting the weights of the interneuron connections, or "synapses," so that the network produces the desired output. Values of certain parameters in the algorithms used to train the networks must be chosen and then adapted or optimized carefully. Error backpropagation training of the multilayer perceptron, for example, requires judicious choice of the step-size and momentum parameters. The ART network requires choice of a vigilance parameter, while a learning rate must be selected for Kohonen networks. These parameters are typically chosen and adapted by a human "neural smith" using heuristic rules. For example, a smooth error surface in backpropagation training of a layered perceptron suggests use of a large step, whereas a steep surface indicates a need for smaller steps. Such heuristic descriptions of parameter selection are inherently fuzzy: the terms "smooth," "large," "steep," and "smaller" are fuzzy linguistic variables that can be encoded in a fuzzy inference engine. A properly designed fuzzy controller can therefore relieve the neural smith of some labor and get the job done more quickly and reliably. The concept has broader possible applications, beyond neural network training, to parameter selection and optimization generally.
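The smooth-surface/large-step heuristic above can be sketched as a tiny fuzzy controller. The sketch below is hypothetical and not from the paper: it uses the gradient magnitude as a proxy for surface steepness, three simple membership functions ("smooth," "moderate," "steep"), singleton rule outputs, and weighted-average defuzzification to scale a base learning rate during gradient descent on a toy quadratic. The membership breakpoints and output scales (1.5, 1.0, 0.5) are illustrative choices, not values prescribed by the source.

```python
def clip01(x):
    """Clamp a value to the unit interval [0, 1]."""
    return max(0.0, min(1.0, x))

def fuzzy_step_scale(grad_norm):
    """Map gradient magnitude (a proxy for error-surface steepness)
    to a multiplier on the base step size.

    Illustrative rule base:
      IF surface is smooth   (small gradient) THEN step is large  (scale 1.5)
      IF surface is moderate                  THEN step unchanged (scale 1.0)
      IF surface is steep    (large gradient) THEN step is small  (scale 0.5)
    """
    # Membership degrees of the fuzzy linguistic terms (hypothetical shapes).
    smooth   = clip01(1.0 - grad_norm)           # 1 at g=0, fades out by g=1
    moderate = max(0.0, 1.0 - abs(grad_norm - 1.0))  # triangular peak at g=1
    steep    = clip01(grad_norm - 1.0)           # 0 below g=1, saturates at g=2

    # Weighted-average (singleton) defuzzification of the rule outputs.
    total = smooth + moderate + steep
    if total == 0.0:
        return 1.0  # no rule fires: leave the step unchanged
    return (smooth * 1.5 + moderate * 1.0 + steep * 0.5) / total

def fuzzy_gradient_descent(w0, base_lr=0.1, iters=100):
    """Gradient descent on f(w) = w^2 with fuzzy step-size adaptation."""
    w = w0
    for _ in range(iters):
        grad = 2.0 * w                       # gradient of f(w) = w^2
        lr = base_lr * fuzzy_step_scale(abs(grad))
        w -= lr * grad
    return w

# Far from the minimum the surface is steep, so steps shrink;
# near the minimum it is smooth, so steps grow again.
w_final = fuzzy_gradient_descent(3.0)
```

In a real training loop the controller would typically consume smoothed statistics (e.g. a running average of loss change) rather than the raw gradient norm, but the inference structure — fuzzify, fire rules, defuzzify into a parameter update — is the same.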