Information Sciences: an International Journal
Individual rationality, or doing what is best for oneself, is the standard model used to explain and predict human behavior, and von Neumann–Morgenstern game theory is its classical mathematical formalization in multiple-agent settings. Individual rationality is an inadequate model for synthesizing artificial social systems in which cooperation is essential, however, since it accommodates group interests only as aggregations of individual interests. Satisficing game theory rests instead on a well-defined notion of being good enough, and it accommodates group as well as individual interests through conditional preference relationships, whereby a decision maker can adjust its preferences as a function of the preferences, and not merely the options, of others. This theory is offered as an alternative paradigm for constructing artificial societies capable of complex behavior that goes beyond exclusive self-interest.
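The two ideas in the abstract can be illustrated with a minimal sketch. In satisficing game theory, an option is commonly deemed "good enough" when its selectability (benefit-oriented support) is at least a caution index times its rejectability (cost-oriented support); a conditional preference lets one agent's selectability depend on what another agent prefers. The function names, numbers, and the two-option scenario below are illustrative assumptions, not taken from the source.

```python
# Sketch of a satisficing choice rule (illustrative values only):
# an option u is "good enough" when its selectability p_S[u] is at
# least a caution index b times its rejectability p_R[u].

def satisficing_set(options, p_S, p_R, b=1.0):
    """Return the options whose selectability meets b times their rejectability."""
    return [u for u in options if p_S[u] >= b * p_R[u]]

# Conditional preference: agent 2's selectability depends on which
# option agent 1 prefers, not merely on the options themselves.
def conditional_selectability(agent1_preference):
    if agent1_preference == "cooperate":
        return {"cooperate": 0.7, "defect": 0.3}
    return {"cooperate": 0.4, "defect": 0.6}

options = ["cooperate", "defect"]
p_R = {"cooperate": 0.4, "defect": 0.6}   # hypothetical rejectability
p_S = conditional_selectability("cooperate")
print(satisficing_set(options, p_S, p_R))  # -> ['cooperate']
```

If agent 1 instead preferred "defect", agent 2's selectability shifts and both options become satisficing only where selectability still covers rejectability; this is how group-level considerations enter without reducing them to a sum of fixed individual utilities.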