A Partially Observed Markov Decision Process for Dynamic Pricing

  • Authors:
  • Yossi Aviv; Amit Pazgal

  • Affiliation:
  • Olin School of Business, Washington University, St. Louis, Missouri 63130 (both authors)

  • Venue:
  • Management Science
  • Year:
  • 2005


Abstract

In this paper, we develop a stylized partially observed Markov decision process (POMDP) framework to study a dynamic pricing problem faced by sellers of fashion-like goods. We consider a retailer that plans to sell a given stock of items during a finite sales season. The objective of the retailer is to dynamically price the product in a way that maximizes expected revenues. Our model brings together various types of uncertainties about the demand, some of which are resolvable through sales observations. We develop a rigorous upper bound for the seller's optimal dynamic decision problem and use it to propose an active-learning heuristic pricing policy. We conduct a numerical study to test the performance of four different heuristic dynamic pricing policies in order to gain insight into several important managerial questions that arise in the context of revenue management.
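The core mechanism the abstract describes, learning about uncertain demand from sales observations and pricing against the updated belief, can be illustrated with a minimal sketch. The two-scenario demand model, the exponential price-response form, and the myopic (one-period) pricing rule below are illustrative assumptions, not the paper's actual model or its active-learning heuristic:

```python
import math

# Hypothetical two-scenario demand model: the market is either "high" or "low".
# Demand per period at price p is Poisson with rate a * exp(-b * p).
# All numbers are illustrative, not taken from the paper.
SCENARIOS = {"high": 2.0, "low": 0.8}  # base demand intensity a per scenario
PRICE_SENSITIVITY = 0.5                # price-response parameter b

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def rate(a, price):
    return a * math.exp(-PRICE_SENSITIVITY * price)

def update_belief(belief, price, sales):
    """Bayes update of the scenario belief after observing `sales` units
    sold at `price` -- the sense in which demand uncertainty is
    'resolvable through sales observations'."""
    post = {s: belief[s] * poisson_pmf(sales, rate(a, price))
            for s, a in SCENARIOS.items()}
    total = sum(post.values())
    return {s: v / total for s, v in post.items()}

def myopic_price(belief, price_grid):
    """Price maximizing one-period expected revenue under the current
    belief (a simple myopic heuristic, not the paper's policy)."""
    def expected_revenue(p):
        return p * sum(belief[s] * rate(SCENARIOS[s], p) for s in SCENARIOS)
    return max(price_grid, key=expected_revenue)

# One period of the learn-and-price loop:
belief = {"high": 0.5, "low": 0.5}
p = myopic_price(belief, [1.0, 2.0, 3.0])
belief = update_belief(belief, p, sales=3)  # strong sales shift mass to "high"
```

A true POMDP-optimal policy would additionally account for the future informational value of each price (active learning), which is what separates the heuristics compared in the paper's numerical study.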