AIS'12 Proceedings of the Third international conference on Autonomous and Intelligent Systems
The development of competitive artificial Poker players is a challenge to Artificial Intelligence (AI) because the agent must deal with unreliable information and deception, which makes opponent modeling essential for good results. In this paper we propose creating an artificial Poker player through the analysis of past games between human players, with real money involved. To accomplish this goal, we defined a classification problem that associates a given game state with the action performed by the player. To validate and test the defined player model, we created an agent that follows the learned tactic. The agent approximately follows the tactics of the human players, thus validating the model. However, this approach alone is insufficient to create a competitive agent, because the generated strategies are static and therefore cannot adapt to different situations. To solve this problem, we created an agent whose strategy combines several tactics from different players. By using the combined strategy, the agent greatly improved its performance against adversaries capable of modeling opponents.
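The pipeline described in the abstract — learn a game-state-to-action classifier per human player, then mix the learned tactics so the resulting strategy is not static — can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the features (`hand` strength, `pot_odds`), the synthetic data generator standing in for real hand-history logs, the k-nearest-neighbour classifier standing in for the paper's learned models, and the round-robin tactic-mixing rule.

```python
import random
from collections import Counter

ACTIONS = ("fold", "call", "raise")

def synthetic_games(n, aggression):
    """Fake (game_state, action) pairs standing in for real hand logs.

    `aggression` controls how often this synthetic player raises a
    strong hand, so different values yield distinguishable tactics.
    """
    data = []
    for _ in range(n):
        hand = random.random()          # hand strength, 0 = worst, 1 = best
        pot_odds = random.random()
        if hand > 0.6:
            action = "raise" if random.random() < aggression else "call"
        elif hand > pot_odds:
            action = "call"
        else:
            action = "fold"
        data.append(((hand, pot_odds), action))
    return data

def knn_predict(data, state, k=15):
    """Classify a game state by the majority action taken by the k most
    similar training states (a toy stand-in for the learned classifier)."""
    dist = lambda point: sum((x - y) ** 2 for x, y in zip(point, state))
    nearest = sorted(data, key=lambda row: dist(row[0]))[:k]
    return Counter(action for _, action in nearest).most_common(1)[0][0]

random.seed(0)
# One learned "tactic" (training set) per synthetic human player:
# an aggressive player and a passive one.
tactics = [synthetic_games(2000, a) for a in (0.95, 0.3)]

def combined_strategy(state, hand_no):
    """Toy mixing rule: rotate between the learned tactics so that
    opponents modeling the agent face a non-static strategy."""
    return knn_predict(tactics[hand_no % len(tactics)], state)

print(combined_strategy((0.05, 0.9), 0))   # weak hand, bad pot odds -> "fold"
```

The mixing rule shown here simply alternates tactics per hand; any policy that switches among learned player models based on context would serve the same purpose of avoiding a single predictable strategy.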