Programming Agent Behavior by Learning in Simulation Models
Applied Artificial Intelligence - Eighth European Workshop on Multi-Agent Systems (EUMAS 2010)
Due to the "generative" nature of macro phenomena, agent-based systems require experience on the part of the modeler to determine the proper low-level agent behavior. Adaptive and learning agents can facilitate this task: partially or preliminarily learned versions of the behavior can serve as inspiration for the human modeler. Using a simulation process, we develop agents that explore their sensors and actuators within a given environment. The exploration is guided by rewards attributed to their actions, expressed in an objective function. These rewards are used to develop a situation-action mapping, which is later abstracted to a human-readable format. In this contribution we test the robustness of a decision-tree representation of the agent's decision-making process with respect to changes in the objective function. The importance of this study lies in understanding how sensitive the final abstraction of the model is to the definition of the objective function, not merely in evaluating performance.
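The pipeline described above (reward-guided exploration producing a situation-action mapping, which is then abstracted) can be sketched in miniature. The following is only an illustrative assumption, not the paper's actual setup: a toy 1-D corridor environment, tabular Q-learning with epsilon-greedy exploration, and a simple reward (the objective function) for reaching the goal cell. All names and parameter values here are hypothetical.

```python
import random

# Hypothetical toy environment: a 1-D corridor of 5 cells.
# The objective function rewards reaching the rightmost cell.
N_STATES = 5
ACTIONS = ["left", "right"]

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    if action == "left":
        next_state = max(state - 1, 0)
    else:
        next_state = min(state + 1, N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

def q_learning(episodes=300, alpha=0.5, gamma=0.9, epsilon=0.3, seed=0):
    """Tabular Q-learning; exploration is guided by the reward signal."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # epsilon-greedy: occasionally explore, otherwise exploit
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            next_state, reward, done = step(state, action)
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next
                                           - q[(state, action)])
            state = next_state
    return q

def greedy_policy(q):
    """Collapse the Q-table into a situation-action mapping."""
    return {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)}

policy = greedy_policy(q_learning())
print(policy)
```

In the approach described in the abstract, the resulting situation-action mapping is further abstracted by a decision-tree learner into a human-readable representation; here the tabular mapping is simply printed. Changing the reward definition in `step` and observing how the extracted mapping changes mimics, in miniature, the sensitivity question the study addresses.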