Game Theoretic Stochastic Routing for Fault Tolerance and Security in Computer Networks
IEEE Transactions on Parallel and Distributed Systems
We consider dynamic, two-player, zero-sum games where the "minimizing" player seeks to drive an underlying finite-state dynamic system to a special terminal state along a path of least expected cost. The "maximizer" seeks to interfere with the minimizer's progress so as to maximize the expected total cost. We consider, for the first time, undiscounted finite-state problems with compact action spaces and transition costs that are not strictly positive. We allow policies for the minimizer under which the maximizer can prolong the game indefinitely. Under assumptions which generalize deterministic shortest path problems, we establish (i) the existence of a real-valued equilibrium cost vector achievable with stationary policies for the opposing players and (ii) the convergence of value iteration and policy iteration to the unique solution of Bellman's equation.
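The value iteration the abstract refers to can be sketched for a small stochastic shortest path game. The minimax Bellman update at each non-terminal state is J(i) = min over u of max over v of [c(i,u,v) + sum over j of p(i,j,u,v) J(j)], with J fixed at zero on the terminal state. The toy game below (states, costs, transition probabilities, and the min-max order of play) is an invented illustration, not taken from the paper; the paper's assumptions guarantee convergence, which in this example holds because every action pair reaches the terminal state with positive probability.

```python
N_STATES = 3          # states 0 and 1; state 2 is the terminal state
TERMINAL = 2
U = [0, 1]            # minimizer's actions
V = [0, 1]            # maximizer's actions

# cost[i][u][v]: one-stage cost at state i under action pair (u, v)
# (hypothetical numbers, chosen only for illustration)
cost = {
    0: {0: {0: 1.0, 1: 2.0}, 1: {0: 3.0, 1: 1.0}},
    1: {0: {0: 2.0, 1: 1.0}, 1: {0: 1.0, 1: 4.0}},
}

# p[i][u][v]: transition distribution over states 0..2;
# every action pair reaches the terminal state with positive probability
p = {
    0: {0: {0: [0.0, 0.5, 0.5], 1: [0.2, 0.3, 0.5]},
        1: {0: [0.1, 0.1, 0.8], 1: [0.0, 0.6, 0.4]}},
    1: {0: {0: [0.3, 0.0, 0.7], 1: [0.0, 0.2, 0.8]},
        1: {0: [0.5, 0.0, 0.5], 1: [0.1, 0.0, 0.9]}},
}

def value_iteration(tol=1e-10, max_iter=10_000):
    """Iterate the minimax Bellman operator until the updates stabilize."""
    J = [0.0] * N_STATES          # J[TERMINAL] stays 0 by construction
    for _ in range(max_iter):
        J_new = list(J)
        for i in range(N_STATES):
            if i == TERMINAL:
                continue
            # minimizer picks u anticipating the maximizer's worst-case v
            J_new[i] = min(
                max(cost[i][u][v]
                    + sum(p[i][u][v][j] * J[j] for j in range(N_STATES))
                    for v in V)
                for u in U)
        if max(abs(a - b) for a, b in zip(J, J_new)) < tol:
            return J_new
        J = J_new
    return J

if __name__ == "__main__":
    print(value_iteration())
```

The fixed point returned here satisfies Bellman's equation; policy iteration would instead alternate between evaluating a fixed stationary minimizer policy against the maximizer's best response and improving the minimizer's policy greedily.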