COMPUTING AVERAGE OPTIMAL CONSTRAINED POLICIES IN STOCHASTIC DYNAMIC PROGRAMMING

  • Authors:
  • Linn I. Sennott

  • Affiliations:
  • Department of Mathematics, Illinois State University, Normal, Illinois 61790-4520, E-mail: sennott@math.ilstu.edu

  • Venue:
  • Probability in the Engineering and Informational Sciences
  • Year:
  • 2001

Abstract

A stochastic dynamic program incurs two types of cost: a service cost and a quality-of-service (delay) cost. The objective is to minimize the expected average service cost, subject to a constraint on the average quality-of-service cost. When the state space S is finite, we show how to compute an optimal policy for the general constrained problem under weak conditions. The development uses a Lagrange multiplier approach and value iteration. When S is denumerably infinite, we give a method for computing an optimal policy, using a sequence of approximating finite-state problems. The method is illustrated with two computational examples.
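
The abstract's core idea, combining the two costs through a Lagrange multiplier and solving the resulting unconstrained average-cost problem by value iteration, can be sketched for a small finite-state problem. The following is a minimal illustrative sketch, not the paper's algorithm: the transition matrices, cost arrays, constraint level alpha, and the bisection search on the multiplier are assumptions made up for the example, and a truly optimal constrained policy may require randomizing between two deterministic policies at the critical multiplier, a subtlety the paper addresses and this sketch ignores.

```python
# Illustrative sketch only: Lagrangian value iteration for a constrained
# average-cost MDP with service cost c(s, a) and delay cost d(s, a).
# All numerical data below are toy assumptions, not from the paper.
import numpy as np

def relative_value_iteration(P, cost, iters=2000):
    """Average-cost relative value iteration for the combined cost cost[s, a].

    P[a] is the |S| x |S| transition matrix under action a.
    Returns a greedy policy and the (approximate, normalized) bias function.
    """
    n_states, n_actions = cost.shape
    h = np.zeros(n_states)                          # relative value (bias) estimates
    for _ in range(iters):
        q = np.array([cost[:, a] + P[a] @ h for a in range(n_actions)]).T
        h_new = q.min(axis=1)
        h = h_new - h_new[0]                        # normalize to keep iterates bounded
    return q.argmin(axis=1), h

def average_cost(P, cost, policy, horizon=5000):
    """Long-run average of `cost` under a stationary deterministic policy,
    via power iteration of the induced chain's stationary distribution."""
    n_states = cost.shape[0]
    P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
    mu = np.ones(n_states) / n_states
    for _ in range(horizon):
        mu = mu @ P_pi
    return mu @ np.array([cost[s, policy[s]] for s in range(n_states)])

# Toy two-state, two-action problem (all numbers are assumptions).
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),           # transitions under action 0
     np.array([[0.5, 0.5], [0.6, 0.4]])]           # transitions under action 1
c = np.array([[1.0, 4.0], [2.0, 5.0]])             # service cost
d = np.array([[3.0, 0.5], [4.0, 1.0]])             # quality-of-service (delay) cost
alpha = 2.0                                         # constraint: average d-cost <= alpha

lo, hi = 0.0, 10.0                                  # bisection on the Lagrange multiplier
for _ in range(30):
    lam = 0.5 * (lo + hi)
    policy, _ = relative_value_iteration(P, c + lam * d)
    if average_cost(P, d, policy) > alpha:
        lo = lam                                    # constraint violated: penalize delay more
    else:
        hi = lam                                    # constraint met: try a smaller multiplier
print("multiplier:", lam, "policy:", policy, "avg delay cost:", average_cost(P, d, policy))
```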