We consider a scheduling problem in which two classes of independent jobs have to be processed non-preemptively by a single machine. The processing times of the jobs are assumed to be exponentially distributed, with parameters depending on the class of each job. The objective is to minimize the sum of expected completion times. We adopt a Bayesian framework in which both job class parameters are assumed to be unknown. However, by processing jobs from the corresponding class, the scheduler can gradually learn the values of these parameters, thereby improving future decisions. For the traditional stochastic scheduling variant, in which the parameters are known, the policy that always processes a job with Shortest Expected Processing Time (SEPT) is optimal. In this paper, we show that in the Bayesian framework the performance of SEPT is at most a factor of 2 away from the performance of an optimal policy. Furthermore, we introduce a second policy, learning-SEPT (ℓ-SEPT), an adaptive variant of SEPT. We show that ℓ-SEPT is no worse than SEPT and empirically outperforms it. However, both policies have the same worst-case performance; that is, the bound of 2 is tight for both policies.
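The adaptive idea behind ℓ-SEPT can be sketched in code. The following is a minimal illustrative simulation, not the paper's exact formulation: it assumes conjugate Gamma priors on the two unknown exponential rates, updates them after each completed job, and always schedules a job from the class with the smaller posterior-mean expected processing time. The prior parameters, the true rates, and the job counts are all hypothetical choices for the example.

```python
import random

def ell_sept(jobs, prior=(2.0, 1.0), true_rates=(2.0, 0.5), rng=random):
    """Sketch of an l-SEPT-style adaptive policy on a single machine.

    jobs       -- (n0, n1): number of jobs in each of the two classes
    prior      -- Gamma(alpha, beta) prior on each class's unknown rate;
                  with alpha > 1, E[processing time] = beta / (alpha - 1)
    true_rates -- hypothetical true exponential rates (unknown to the scheduler)

    Returns the realized sum of completion times for one simulated run.
    """
    post = {c: list(prior) for c in (0, 1)}        # posterior (alpha, beta) per class
    remaining = {0: jobs[0], 1: jobs[1]}           # jobs left in each class
    t, total_completion = 0.0, 0.0
    while remaining[0] or remaining[1]:
        # Pick the available class with the smaller posterior-mean processing time.
        avail = [c for c in (0, 1) if remaining[c]]
        c = min(avail, key=lambda c: post[c][1] / (post[c][0] - 1))
        # Process one job non-preemptively; its duration is exponential.
        dur = rng.expovariate(true_rates[c])
        t += dur
        total_completion += t
        remaining[c] -= 1
        # Conjugate Gamma update: one more observation, total time increases by dur.
        post[c][0] += 1.0
        post[c][1] += dur
    return total_completion
```

Under this sketch, the non-adaptive SEPT baseline would simply rank the classes once by their prior-mean processing times, whereas ℓ-SEPT revises the ranking as observed durations accumulate.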