Approximation algorithms for stochastic orienteering

  • Authors:
  • Anupam Gupta; Ravishankar Krishnaswamy; Viswanath Nagarajan; R. Ravi

  • Affiliations:
  • Carnegie Mellon University, Pittsburgh, PA; Carnegie Mellon University, Pittsburgh, PA; IBM T. J. Watson Research Center, Yorktown Heights, NY; Carnegie Mellon University, Pittsburgh, PA

  • Venue:
  • Proceedings of the twenty-third annual ACM-SIAM symposium on Discrete Algorithms
  • Year:
  • 2012


Abstract

In the Stochastic Orienteering problem, we are given a metric where each node holds a job with some deterministic reward and a random size. (Think of the jobs as chores one needs to run, and the sizes as the time it takes to do each chore.) The goal is to adaptively decide which nodes to visit so as to maximize the total expected reward, subject to the constraint that the total distance traveled plus the total size of the jobs processed is at most a given budget B. (That is, we collect the reward for every chore we finish by the end of the day.) The (random) size of a job is not known until it is completely processed. Hence the problem combines aspects of both the stochastic knapsack problem with uncertain item sizes and the deterministic orienteering problem of using a limited travel time to maximize the rewards gathered at nodes. In this paper, we present a constant-factor approximation algorithm for the best non-adaptive policy for the Stochastic Orienteering problem. We also show a small adaptivity gap---i.e., the existence of a non-adaptive policy whose reward is at least an Ω(1/log log B) fraction of the optimal expected reward---and hence we also obtain an O(log log B)-approximation algorithm for the adaptive problem. Finally, we address the case where the node rewards are also random and could be correlated with the waiting time, and give a non-adaptive policy that is an O(log n log B)-approximation to the best adaptive policy on n-node metrics with budget B.
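To make the objective concrete, here is a minimal Monte Carlo sketch (not from the paper) of evaluating a *non-adaptive* policy: visit nodes in a fixed order, spend travel time plus the (random) processing time of each job, and collect a job's reward only if it finishes within the budget B. The names `dist`, `rewards`, and `sample_size` are hypothetical illustration inputs, not the paper's notation.

```python
import random

def simulate_policy(order, dist, rewards, sample_size, budget, trials=20000, seed=0):
    """Estimate the expected reward of a fixed visiting order (a non-adaptive
    policy) for stochastic orienteering.

    order       -- list of nodes visited in sequence; order[0] is the start node
    dist        -- dict-of-dicts metric: dist[u][v] is the travel distance
    rewards     -- deterministic reward at each node
    sample_size -- sample_size(v, rng) draws one random job size at node v
    budget      -- total allowance B for travel plus processing time
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        elapsed, cur, gained = 0.0, order[0], 0.0
        for v in order:
            elapsed += dist[cur][v]          # travel to the next node
            elapsed += sample_size(v, rng)   # job size, revealed only on completion
            cur = v
            if elapsed > budget:             # job did not finish by the deadline:
                break                        # no reward for it, and the day is over
            gained += rewards[v]
        total += gained
    return total / trials
```

An adaptive policy would instead choose the next node after observing each realized job size; the paper's adaptivity-gap result bounds how much that extra flexibility can help.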