Second-order Markov reward models driven by QBD processes

  • Authors:
  • Nigel G. Bean; Małgorzata M. O'Reilly; Yong Ren

  • Affiliations:
  • Applied Mathematics, University of Adelaide, SA 5005, Australia; School of Mathematics, University of Tasmania, GPO Box 252C-37, Hobart, Tasmania 7001, Australia; School of Mathematics, University of Tasmania, GPO Box 252C-37, Hobart, Tasmania 7001, Australia

  • Venue:
  • Performance Evaluation
  • Year:
  • 2012

Abstract

Second-order reward models are an important class of models for evaluating the performance of real-life systems in which the reward measure fluctuates according to some underlying noise. These models consist of a Markov chain driving the evolution of the system and a continuous reward variable representing its performance. Thus far, only models with a finite number of states have been studied. We consider second-order reward models driven by quasi-birth-and-death (QBD) processes, a class of block-structured Markov chains with infinitely many states. We derive expressions for the Laplace–Stieltjes transforms of the accumulated reward and demonstrate how they can be evaluated efficiently. We use our results to analyse a simple example and, in doing so, show that the second-order feature can make a significant difference to the accumulated reward. The inclusion of the second-order feature also creates new difficulties, which require the development of new conditions in the analysis.
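To make the model class in the abstract concrete, here is a minimal simulation sketch of a second-order Markov reward model over a *finite* phase space (standing in for the QBD of the paper): a continuous-time Markov chain modulates both the drift and the Brownian noise of the accumulated reward. All names, the generator `Q`, and the Euler discretisation are illustrative assumptions, not the authors' method or notation.

```python
import math
import random

def simulate_accumulated_reward(Q, drift, sigma, T, x0=0, dt=1e-3, rng=None):
    """Simulate one sample path of a second-order Markov reward model.

    The phase evolves as a CTMC with generator Q (list of lists of rates);
    while in phase i, the reward accumulates with drift drift[i] and
    Brownian fluctuation of standard deviation sigma[i] per unit time --
    the 'second-order' feature.  Illustrative sketch only: a finite state
    space replaces the infinite QBD phase process of the paper.
    """
    rng = rng or random.Random()
    state, t, reward = x0, 0.0, 0.0
    while t < T:
        rate = -Q[state][state]
        # Exponential holding time until the next phase transition.
        hold = rng.expovariate(rate) if rate > 0 else T - t
        span = min(hold, T - t)
        # Euler scheme over [t, t + span]: drift plus Gaussian noise per step.
        steps = max(1, int(span / dt))
        h = span / steps
        for _ in range(steps):
            reward += drift[state] * h + sigma[state] * math.sqrt(h) * rng.gauss(0, 1)
        t += span
        if t < T:
            # Pick the next phase in proportion to the off-diagonal rates.
            u = rng.random() * rate
            acc = 0.0
            for j, q in enumerate(Q[state]):
                if j == state:
                    continue
                acc += q
                if u <= acc:
                    state = j
                    break
    return reward

# Hypothetical two-phase example: setting sigma to zero recovers the
# classical first-order (fluid) reward, so the accumulated reward with
# unit drift over [0, T] is simply T.
Q = [[-1.0, 1.0], [2.0, -2.0]]
first_order = simulate_accumulated_reward(
    Q, drift=[1.0, 1.0], sigma=[0.0, 0.0], T=5.0, rng=random.Random(0))
second_order = simulate_accumulated_reward(
    Q, drift=[1.0, 1.0], sigma=[0.5, 1.0], T=5.0, rng=random.Random(0))
```

Comparing `first_order` (deterministic accumulation, equal to T for unit drift) against `second_order` over repeated runs gives a quick empirical sense of how the noise term spreads the distribution of the accumulated reward, which is the effect the paper quantifies analytically via Laplace–Stieltjes transforms.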