An Evolutionary Solution for Cooperative and Competitive Mobile Agents
AICI '09 Proceedings of the International Conference on Artificial Intelligence and Computational Intelligence
Evaluation of techniques for a learning-driven modeling methodology in multiagent simulation
MATES'10 Proceedings of the 8th German conference on Multiagent system technologies
Evolution for modeling: a genetic programming framework for SeSAm
Proceedings of the 13th annual conference companion on Genetic and evolutionary computation
Generating inspiration for agent design by reinforcement learning
Information and Software Technology
The proposed method operates in three steps: first, when a change in the environment is perceived, agents take appropriate actions. Second, these behaviors are stimulated and constrained through communication with other agents. Third, the most frequently stimulated behavior is adopted as the group behavior strategy. Two reward models, reward model 1 and reward model 2, are applied; each is designed to either reinforce or constrain behaviors. In competitive agent environments, a behavior judged advantageous is reinforced by adding to its reward value, while a behavior judged disadvantageous is constrained by reducing its reward value. The validity of this strategy is verified through simulation.
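The three steps above can be sketched in code. This is a minimal illustrative sketch, not the paper's actual implementation: the behavior names, the reward delta, the `Agent` class, and the majority-vote selection of the group strategy are all assumptions made for clarity.

```python
from collections import Counter

class Agent:
    """Hypothetical agent that tracks a reward value per candidate behavior."""

    def __init__(self, behaviors):
        self.rewards = {b: 0.0 for b in behaviors}

    def act(self, env_change):
        # Step 1: on a perceived environment change, take the behavior
        # with the highest accumulated reward.
        return max(self.rewards, key=self.rewards.get)

    def communicate(self, behavior, advantageous, delta=1.0):
        # Step 2: communication from other agents stimulates (reinforces)
        # or constrains a behavior by adding or reducing its reward value.
        self.rewards[behavior] += delta if advantageous else -delta

def group_strategy(agents, env_change):
    # Step 3: the behavior most frequently stimulated across the group
    # is adopted as the group behavior strategy.
    votes = Counter(agent.act(env_change) for agent in agents)
    return votes.most_common(1)[0][0]
```

For example, if several agents each receive communications reinforcing the same behavior, that behavior accumulates reward in every agent and emerges as the group strategy.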