Agents in a competitive interaction can benefit greatly from adapting to a particular adversary, rather than using the same general strategy against all opponents. One method of such adaptation is opponent modeling, in which a model of an opponent is acquired and utilized as part of the agent's decision procedure in future interactions with this opponent. However, acquiring an accurate model of a complex opponent strategy may be computationally infeasible. In addition, if the learned model is not accurate, then using it to predict the opponent's actions may harm the agent's strategy rather than improve it. We thus define the concept of opponent weakness, and present a method for learning a model of this simpler concept. We analyze examples of an opponent's past behavior in a particular domain, judging its actions using a trusted judge. We then infer a weakness model based on the opponent's actions relative to the domain state, and incorporate this model into our agent's decision procedure. We also make use of a similar self-weakness model, allowing the agent to prefer states in which the opponent is weak and our agent is strong; that is, states in which we have a relative advantage over the opponent. Experimental results spanning two different test domains demonstrate the agents' improved performance when making use of the weakness models.
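The pipeline the abstract describes can be sketched in a few lines: judge past (state, action) pairs, fit a weakness model over state features, and fold the opponent-weakness and self-weakness estimates into the agent's evaluation. The sketch below is a minimal illustration under assumed interfaces; all names (`judge`, `featurize`, the per-feature-vector error-rate "model") are hypothetical stand-ins, not the paper's actual method or API.

```python
# Hypothetical sketch of a weakness-modeling pipeline: label past behavior
# with a trusted judge, learn a weakness estimate per state-feature vector,
# and bias the agent's evaluation toward relative-advantage states.

def label_examples(history, judge, featurize):
    """Turn past (state, action) pairs into (features, was_weak) examples.
    `judge` is an assumed trusted judge returning 1 for a poor action."""
    return [(featurize(state), judge(state, action))
            for state, action in history]

def train_weakness_model(data):
    """Toy 'learner': empirical weakness rate per feature vector.
    A real learner (e.g. a decision tree) would generalize across features."""
    counts = {}
    for feats, weak in data:
        n, w = counts.get(feats, (0, 0))
        counts[feats] = (n + 1, w + weak)

    def weakness(feats):
        n, w = counts.get(feats, (0, 0))
        return w / n if n else 0.5  # unseen states: no information either way
    return weakness

def relative_advantage_eval(state, heuristic, opp_weak, self_weak,
                            featurize, w=0.5):
    """Base domain heuristic plus a bonus for states where the opponent
    is predicted weak and our own agent is predicted strong."""
    feats = featurize(state)
    return heuristic(state) + w * (opp_weak(feats) - self_weak(feats))
```

In use, `relative_advantage_eval` would replace the plain heuristic at the leaves of whatever search the agent already runs, so weakness modeling changes only the evaluation, not the search itself.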