Improved Algorithms for Linear Inequalities with Two Variables per Inequality
SIAM Journal on Computing
Weak monotonicity suffices for truthfulness on convex domains
Proceedings of the 6th ACM Conference on Electronic Commerce
Truthful germs are contagious: a local to global characterization of truthfulness
Proceedings of the 9th ACM Conference on Electronic Commerce
Collusion-Resistant Mechanisms with Verification Yielding Optimal Solutions
Proceedings of the 16th Annual European Symposium on Algorithms (ESA '08)
The power of verification for one-parameter agents
Journal of Computer and System Sciences
Optimal collusion-resistant mechanisms with verification
Proceedings of the 10th ACM Conference on Electronic Commerce
Combinatorial auctions with verification are tractable
Proceedings of the 18th Annual European Symposium on Algorithms (ESA '10), Part II
Alternatives to truthfulness are hard to recognize
Autonomous Agents and Multi-Agent Systems
Mechanism design with partial verification and revelation principle
Autonomous Agents and Multi-Agent Systems
New constructions of mechanisms with verification
Proceedings of the 33rd International Colloquium on Automata, Languages and Programming (ICALP '06), Part I
Winner-imposing strategyproof mechanisms for multiple facility location games
Theoretical Computer Science
Algorithmic mechanism design is concerned with designing algorithms for settings where inputs are controlled by selfish agents, and the center needs to motivate the agents to report their true values. In this paper, we study scenarios where the center may be able to verify whether the agents report their preferences (types) truthfully. We first consider the standard model of mechanism design with partial verification, where the set of types that an agent can report is a function of his true type. We explore inherent limitations of this model; in particular, we show that the famous Gibbard–Satterthwaite impossibility result holds even if a manipulator can only lie by swapping two adjacent alternatives in his vote. Motivated by these negative results, we then introduce a richer model of verification, which we term mechanism design with probabilistic verification. In our model, an agent may report any type, but will be caught with some probability that may depend on his true type, the reported type, or both; if an agent is caught lying, he will not get his payment and may be fined. We characterize the class of social choice functions that can be truthfully implemented in this model. We then proceed to study the complexity of finding an optimal individually rational implementation, i.e., one that minimizes the center's expected payment while guaranteeing non-negative utility to the agent, both for truthful and for non-truthful implementation. Our hardness result for non-truthful implementation answers an open question recently posed by Auletta et al. [2011].
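To make the probabilistic-verification model concrete, the following is a minimal sketch, not the paper's construction: all names, the toy type space, and the specific values, payments, catch probabilities, and fines below are illustrative assumptions. An agent with true type t who reports t' is caught with probability p[t][t']; when caught, he forfeits his payment and pays a fine. A report rule is truthful if, for every true type, reporting it maximizes the agent's expected utility.

```python
def expected_utility(value, payment, p_caught, fine):
    """Expected utility of one report: value of the chosen outcome, plus the
    payment kept when not caught, minus the expected fine when caught."""
    return value + (1 - p_caught) * payment - p_caught * fine

def is_truthful(types, value, payment, p, fine):
    """Check the truthfulness constraints: for every true type t, reporting t
    must yield at least the expected utility of any misreport t2."""
    for t in types:
        truthful_u = expected_utility(value[t][t], payment[t], p[t][t], fine[t][t])
        for t2 in types:
            u = expected_utility(value[t][t2], payment[t2], p[t][t2], fine[t][t2])
            if u > truthful_u + 1e-9:
                return False
    return True

# Toy instance (hypothetical numbers): two types; lying is caught with prob. 0.5.
types = ['lo', 'hi']
# value[t][t2]: value to an agent of true type t for the outcome chosen on report t2
value = {'lo': {'lo': 1.0, 'hi': 2.0}, 'hi': {'lo': 0.0, 'hi': 2.0}}
payment = {'lo': 0.5, 'hi': 0.0}          # payment promised for each report
p = {'lo': {'lo': 0.0, 'hi': 0.5},        # catch probabilities p[true][reported]
     'hi': {'lo': 0.5, 'hi': 0.0}}
fine = {t: {t2: 3.0 for t2 in types} for t in types}

print(is_truthful(types, value, payment, p, fine))  # verification deters lying
```

With the catch probabilities set to zero, the same instance is no longer truthful (a 'lo' agent would misreport 'hi'), which is the sense in which verification enlarges the class of implementable social choice functions.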