Simulations have become a powerful research tool in the natural sciences. However, their potential has not been fully realized in the social sciences, in part because of the difficulty of simulating human decision making and reproducing human-like behavior. Recent advances in neo-classical decision theory have identified specific differences between the decision-making capabilities of rational agents and humans, along with speculation about the causes. We present a Q-learning model for simulating human-like decision making based on the intuition-deliberation model proposed by the psychologists Kahneman and Tversky. The model is tested on the classic economic bargaining game, in which humans and rational agents consistently converge to distinctly different strategies. Our experiments show that a selfish agent differs from the strategy of the rational agent and is more similar to the human strategy.
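The abstract does not give implementation details, but the setup it describes can be illustrated with a minimal sketch: two tabular Q-learning agents playing an ultimatum-style bargaining game, where a proposer offers a split of a fixed pie and a responder accepts or rejects. All parameters here (pie size, learning rate, exploration rate, episode count) are illustrative assumptions, not values from the paper.

```python
import random

# Minimal tabular Q-learning sketch for an ultimatum-style bargaining game.
# All constants are hypothetical, chosen only for illustration.
PIE = 10        # total amount to be split
ALPHA = 0.1     # learning rate
EPSILON = 0.1   # exploration rate
EPISODES = 20000

# Proposer picks an offer o in 1..PIE-1 (amount given to the responder);
# responder then chooses 1 (accept) or 0 (reject) for that offer.
q_proposer = {o: 0.0 for o in range(1, PIE)}
q_responder = {o: {0: 0.0, 1: 0.0} for o in range(1, PIE)}

def eps_greedy(qs):
    """Pick a key of qs at random with prob EPSILON, else the greedy one."""
    if random.random() < EPSILON:
        return random.choice(list(qs))
    return max(qs, key=qs.get)

for _ in range(EPISODES):
    offer = eps_greedy(q_proposer)
    action = eps_greedy(q_responder[offer])
    # One-shot game: rewards are immediate, so no bootstrapping term is needed.
    r_prop = (PIE - offer) if action == 1 else 0
    r_resp = offer if action == 1 else 0
    q_proposer[offer] += ALPHA * (r_prop - q_proposer[offer])
    q_responder[offer][action] += ALPHA * (r_resp - q_responder[offer][action])

best_offer = max(q_proposer, key=q_proposer.get)
print("learned offer:", best_offer)
```

With purely selfish rewards like these, the proposer tends to learn low offers once the responder learns that accepting any positive amount beats rejecting; reproducing the human-like deviation reported in the abstract would require the intuition-deliberation mechanism the paper builds on top of plain Q-learning.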