Efficient bidding strategies for Cliff-Edge problems
Autonomous Agents and Multi-Agent Systems
This paper proposes an efficient agent for competing in Cliff-Edge (CE) environments, such as sealed-bid auctions, dynamic pricing, and the ultimatum game. The agent competes in one-shot CE interactions repeatedly, each time against a different human opponent, and its performance is evaluated over all the interactions in which it participates. The agent learns the general pattern of the population's behavior without relying on examples of previous interactions in the environment, neither its own nor those of other competitors. We propose a generic approach that competes in different CE environments under the same configuration, with no knowledge of the specific rules of each environment. The underlying mechanism of the proposed agent is a new meta-algorithm, Deviated Virtual Learning (DVL), which extends existing methods to cope efficiently with environments that offer a large number of possible decisions at each decision point. Experiments comparing the proposed algorithm with algorithms from the literature, as well as with another intuitive meta-algorithm, show that DVL is significantly superior in both average payoff and stability. In addition, the agent performed better than human competitors executing the same task.
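To give a concrete feel for the kind of learning involved, the sketch below illustrates the virtual-learning idea in a first-price sealed-bid auction, combined with a simple deviation margin away from the "cliff" (the losing region). This is a hypothetical toy illustration, not the paper's actual DVL meta-algorithm: the class name, the per-bid win/trial counters, and the fixed `deviation` margin are all assumptions made for this sketch. The cliff-edge structure it exploits is real, though: a winning bid implies that every higher bid would also have won, and a losing bid implies that every lower bid would also have lost, so one observation virtually updates many decisions at once.

```python
class VirtualLearningBidder:
    """Toy first-price sealed-bid bidder illustrating virtual learning
    plus a deviation margin (hypothetical sketch, not the paper's DVL)."""

    def __init__(self, value, n_bids=101, deviation=2):
        self.value = value          # the item's worth to the agent
        self.deviation = deviation  # safety margin away from the cliff edge
        self.wins = [0] * n_bids    # virtual win count per candidate bid
        self.trials = [0] * n_bids  # virtual trial count per candidate bid

    def choose_bid(self):
        # Pick the bid maximizing (value - bid) * estimated P(win),
        # then deviate slightly upward to stay clear of the losing region.
        best, best_ev = 0, float("-inf")
        for b in range(len(self.wins)):
            p = self.wins[b] / self.trials[b] if self.trials[b] else 0.5
            ev = (self.value - b) * p
            if ev > best_ev:
                best, best_ev = b, ev
        return min(len(self.wins) - 1, best + self.deviation)

    def update(self, bid, won):
        if won:
            # Virtual update: any bid at least this high would also have won.
            for b in range(bid, len(self.wins)):
                self.trials[b] += 1
                self.wins[b] += 1
        else:
            # Virtual update: any bid at most this high would also have lost.
            for b in range(bid + 1):
                self.trials[b] += 1


# Demo: against an opponent whose (unknown) fixed bid is 40, the agent
# should learn to bid just above 40 without ever seeing that number.
agent = VirtualLearningBidder(value=100)
for _ in range(300):
    bid = agent.choose_bid()
    agent.update(bid, won=(bid > 40))
```

The single-observation-updates-many-bids trick is what lets such an agent cope with a large decision space from few interactions; the paper's DVL additionally generalizes across different CE environments under one configuration.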