The paper considers a supervised learning algorithm for a nonlinear perceptron with dynamic target adjustment, which speeds up learning and cognition. The difference between the perceptron's targets for objects of the first and second categories is associated with stimulation strength. A feedback chain that controls this difference between targets is interpreted as synthetic emotion. In a population of artificial agents that must learn similar pattern-classification tasks, the presence of these emotions helps a larger fraction of the agents survive. We found that the optimal level of synthetic emotions depends on the difficulty of the pattern-recognition task and on the required learning quality, confirming the Yerkes-Dodson law known from psychology.
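The mechanism described above can be sketched in code. The following is a minimal, hypothetical illustration, not the paper's actual algorithm: a single sigmoid neuron is trained on a two-class task whose targets are not fixed at 0 and 1 but sit at 0.5 ∓ gap/2, and a feedback rule widens the gap while the error rate is high and narrows it as performance improves, playing the role of the "synthetic emotion" feedback chain. The data, learning rate, and gap-update rule are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data (not from the paper): two Gaussian classes in 2-D.
X = np.vstack([rng.normal(-1.0, 0.7, (50, 2)), rng.normal(1.0, 0.7, (50, 2))])
y = np.hstack([np.zeros(50), np.ones(50)])          # class labels 0 / 1

w = np.zeros(2)
b = 0.0
gap = 0.2            # initial separation between the two class targets
lr = 0.5             # learning rate (arbitrary choice)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(200):
    out = sigmoid(X @ w + b)
    # Dynamic targets: class 0 -> 0.5 - gap/2, class 1 -> 0.5 + gap/2.
    targets = 0.5 + (y - 0.5) * gap
    err = targets - out
    grad = err * out * (1.0 - out)                  # delta rule for a sigmoid unit
    w += lr * X.T @ grad / len(X)
    b += lr * grad.mean()
    # Feedback "emotion": a high misclassification rate raises stimulation,
    # widening the target gap; low error lets it relax toward its floor.
    miss = np.mean((out > 0.5) != (y == 1))
    gap = float(np.clip(gap + 0.1 * (miss - 0.05), 0.1, 1.0))

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == (y == 1))
print(f"final accuracy: {accuracy:.2f}, final target gap: {gap:.2f}")
```

Under this toy rule the gap grows early in training, when errors are frequent, and shrinks as the classifier settles, which is the qualitative behavior an inverted-U (Yerkes-Dodson) relation between stimulation and performance would require.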