Basic Ideas for Event-Based Optimization of Markov Systems

  • Authors: Xi-Ren Cao
  • Affiliation: Department of Electrical and Electronic Engineering, Hong Kong University of Science and Technology, Kowloon, Hong Kong
  • Venue: Discrete Event Dynamic Systems
  • Year: 2005


Abstract

The goal of this paper is two-fold. First, we present a sensitivity point of view on the optimization of Markov systems. We show that Markov decision processes (MDPs) and the policy-gradient approach, or perturbation analysis (PA), can be derived easily from two fundamental sensitivity formulas, and that such formulas can be constructed flexibly, from first principles, with performance potentials as building blocks. Second, with this sensitivity view we propose an event-based optimization approach, comprising event-based sensitivity analysis and event-based policy iteration. This approach exploits the special features of a system characterized by events, illustrating how the potentials can be aggregated using these features and how the aggregated potentials can be used in policy iteration. Compared with the traditional MDP approach, the event-based approach has several advantages: the number of aggregated potentials may scale linearly with the system size even though the number of states grows exponentially in the system size, which reduces the policy space and saves computation; the approach does not require actions at different states to be independent; and it exploits the special features of a system without requiring knowledge of the exact transition probability matrix.
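To make the role of performance potentials concrete, here is a minimal sketch of classical average-reward policy iteration on a finite Markov chain, where the potentials (solved from the Poisson equation) serve as the building blocks for policy improvement. The model, variable names, and all numbers below are illustrative assumptions, not taken from the paper; the event-based aggregation the paper proposes is not shown here.

```python
import numpy as np

def potentials(P, f):
    """Average reward eta and performance potentials g for a chain (P, f)."""
    n = len(f)
    # Stationary distribution: solve pi P = pi with sum(pi) = 1 (least squares).
    A = np.vstack([P.T - np.eye(n), np.ones((1, n))])
    b = np.append(np.zeros(n), 1.0)
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    eta = pi @ f
    # Poisson equation (I - P + 1 pi^T) g = f; the solution satisfies pi @ g = eta.
    g = np.linalg.solve(np.eye(n) - P + np.outer(np.ones(n), pi), f)
    return eta, g

def policy_iteration(P_a, f_a, n_iter=50):
    """P_a[a, i]: next-state distribution under action a in state i;
    f_a[a, i]: one-step reward. Returns a (policy, average reward) pair."""
    n = P_a.shape[1]
    d = np.zeros(n, dtype=int)              # start from an arbitrary policy
    for _ in range(n_iter):
        P = P_a[d, np.arange(n)]            # transition matrix induced by d
        f = f_a[d, np.arange(n)]
        eta, g = potentials(P, f)
        # Improvement step: pick the action maximizing f_a(i) + sum_j P_a(i,j) g(j).
        d_new = np.argmax(f_a + P_a @ g, axis=0)
        if np.array_equal(d_new, d):
            break                           # fixed point: policy is optimal
        d = d_new
    return d, eta

# Toy 2-state, 2-action example (made-up numbers, for illustration only).
P_a = np.array([[[0.9, 0.1], [0.5, 0.5]],
                [[0.2, 0.8], [0.7, 0.3]]])
f_a = np.array([[1.0, 0.0],
                [0.5, 0.2]])
policy, eta = policy_iteration(P_a, f_a)
```

Note that the improvement step compares every state-action pair independently; the paper's point is that event-based policies act on (typically far fewer) events rather than states, so only aggregated potentials need to be estimated.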