The revenue management literature for queues typically assumes that providers know the distribution of customer demand attributes. We study an observable M/M/1 queue that serves an unknown proportion of patient and impatient customers. The provider has a Bernoulli prior on this proportion, corresponding to an optimistic or a pessimistic scenario. For every queue length, she chooses a low price, a high price, or turns customers away. Only the high price is informative. The optimal Bayesian price for a queue state is belief-dependent if the optimal policies for the two underlying scenarios disagree at that state; in this case the policy has a belief-threshold structure. As a function of queue length, the optimal Bayesian pricing policy has a zone, or nested-threshold, structure. Moreover, price convergence under the optimal Bayesian policy is sensitive to the system size, i.e., the maximum queue length. We identify two cases: prices converge (1) almost surely to the optimal prices in either scenario, or (2) with positive probability to suboptimal prices. Only Case 2 is consistent with the typical incomplete-learning outcome observed in the literature.
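The belief dynamics described above can be sketched in a few lines. This is a minimal illustration under assumed parameters, not the paper's model: `THETA_OPT` and `THETA_PES` are hypothetical proportions of patient customers in the optimistic and pessimistic scenarios, and we assume that at the informative high price a patient customer joins while an impatient one balks, so each high-price observation is a Bernoulli signal about the scenario.

```python
import random

# Hypothetical proportions of patient customers in the two scenarios
# (illustrative values only, not taken from the paper).
THETA_OPT, THETA_PES = 0.7, 0.3

def update_belief(p, joined):
    """One Bayes update of p = P(optimistic scenario).

    Only the high price is informative: we assume a patient customer
    joins at the high price and an impatient one balks, so `joined`
    is a Bernoulli observation with success rate THETA_OPT or
    THETA_PES depending on the true scenario.
    """
    like_opt = THETA_OPT if joined else 1.0 - THETA_OPT
    like_pes = THETA_PES if joined else 1.0 - THETA_PES
    return like_opt * p / (like_opt * p + like_pes * (1.0 - p))

# Simulate repeated high-price observations when the optimistic
# scenario is true: the belief drifts toward 1.
random.seed(0)
belief = 0.5  # flat prior over the two scenarios
for _ in range(200):
    joined = random.random() < THETA_OPT
    belief = update_belief(belief, joined)
print(round(belief, 4))
```

Note that whenever the policy quotes the uninformative low price, `update_belief` is simply not called and the belief freezes; this is consistent with the incomplete-learning outcome (Case 2) the abstract describes, in which prices can settle on suboptimal values with positive probability.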