We consider the optimal production and inventory control of an assemble-to-order system with m components, one end-product, and n customer classes. A control policy specifies when to produce each component and, whenever an order is placed, whether or not to satisfy it from on-hand inventory. We formulate the problem as a Markov decision process and characterize the structure of an optimal policy. We show that a base-stock production policy is optimal, but the base-stock level for each component is dynamic: it depends on the inventory levels of all other components and is nondecreasing in them. We show that the optimal inventory allocation for each component is a rationing policy with different rationing levels for different demand classes. The rationing levels for each component are likewise dynamic and nondecreasing in the inventory levels of all other components. We compare the performance of the optimal policy to heuristic policies, including the commonly used base-stock policy with fixed base-stock levels, and find that they perform surprisingly well, especially for systems with lost sales.
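The two policy structures described above can be illustrated with a minimal sketch. This is not the paper's optimal (state-dependent) policy: the base-stock level S and the rationing levels r below are fixed, hypothetical values for a single component and two demand classes, chosen purely for illustration.

```python
# Minimal sketch (hypothetical parameters): a static base-stock production rule
# combined with inventory rationing across two demand classes, for a single
# component. In the paper, S and the rationing levels are dynamic, depending
# on the inventory levels of the other components; here they are constants.

S = 5             # base-stock level: produce whenever on-hand inventory < S
r = {1: 0, 2: 2}  # rationing levels: class 1 (high priority) is served down
                  # to 0 units; class 2 only while inventory stays above 2

def should_produce(inventory: int) -> bool:
    """Base-stock rule: produce iff on-hand inventory is below S."""
    return inventory < S

def satisfy_demand(inventory: int, demand_class: int) -> bool:
    """Rationing rule: serve an order from this class only if on-hand
    inventory exceeds the level reserved for higher-priority classes."""
    return inventory > r[demand_class]

# Example: with 2 units on hand, the component is produced, a class-1
# order is served, and a class-2 order is rejected (2 > 2 is false).
print(should_produce(2))     # True
print(satisfy_demand(2, 1))  # True
print(satisfy_demand(2, 2))  # False
```

A state-dependent version would replace the constants S and r with functions of the full inventory vector, nondecreasing in the other components' inventory levels as the abstract states.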