A temporal calculus of communicating systems
CONCUR '90 Proceedings on Theories of Concurrency: Unification and Extension
Modeling and verification of randomized distributed real-time systems
Computer networks (3rd ed.)
State-space support for path-based reward variables
IPDS '98 Proceedings of the Third IEEE International Performance and Dependability Symposium
Communication and Concurrency
Smalltalk-80: The Language
Finite State Markovian Decision Processes
A Generalized Timed Petri Net Model for Performance Analysis
International Workshop on Timed Petri Nets
An Algebra-Based Method to Associate Rewards with EMPA Terms
ICALP '97 Proceedings of the 24th International Colloquium on Automata, Languages and Programming
Model Checking of Probabilistic and Nondeterministic Systems
Proceedings of the 15th Conference on Foundations of Software Technology and Theoretical Computer Science
An Overview and Synthesis on Timed Process Algebras
CAV '91 Proceedings of the 3rd International Workshop on Computer Aided Verification
Verifying Continuous Time Markov Chains
CAV '96 Proceedings of the 8th International Conference on Computer Aided Verification
How to Specify and Verify the Long-Run Average Behavior of Probabilistic Systems
LICS '98 Proceedings of the 13th Annual IEEE Symposium on Logic in Computer Science
Performance modelling of a network processor using POOSL
Computer Networks: The International Journal of Computer and Telecommunications Networking - Network processors
Performance model checking scenario-aware dataflow
FORMATS'11 Proceedings of the 9th international conference on Formal modeling and analysis of timed systems
Today many formalisms exist for specifying complex Markov chains. In contrast, formalisms for specifying rewards, which enable the analysis of long-run average performance properties, have remained quite primitive. Essentially, they support only relatively simple performance metrics that can be expressed as long-run averages of atomic rewards, i.e., rewards that are deducible directly from the individual states of the initial Markov chain specification. To deal with complex performance metrics that depend on the accumulation of atomic rewards over sequences of states, the initial specification has to be extended explicitly to provide the required state information. To solve this problem, we introduce in this paper a new formalism of temporal rewards, which allows complex quantitative properties to be expressed in terms of temporal reward formulas. Together, an initial (discrete-time) Markov chain and the temporal reward formulas implicitly define an extended Markov chain from which the quantitative property can be determined by traditional techniques for computing long-run averages. We give a method to construct the extended chain and prove that this construction leaves long-run averages of atomic rewards invariant. We further establish conditions that guarantee the preservation of ergodicity. The construction method can build the extended chain in an on-the-fly manner, allowing for efficient simulation.
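The baseline the abstract refers to can be illustrated with a short sketch: for an ergodic discrete-time Markov chain with atomic rewards, the long-run average reward is the stationary-distribution-weighted sum of the per-state rewards. The chain, rewards, and function name below are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of the baseline the paper extends: the long-run average
# of an atomic reward over an ergodic discrete-time Markov chain.
# The chain and rewards below are illustrative, not taken from the paper.

# Transition matrix (rows sum to 1) of a small ergodic DTMC.
P = [
    [0.5, 0.5, 0.0],
    [0.2, 0.3, 0.5],
    [0.4, 0.0, 0.6],
]

# Atomic rewards: one value deducible directly from each individual state.
r = [1.0, 0.0, 2.0]

def stationary_distribution(P, iterations=10_000):
    """Approximate the stationary distribution by power iteration.

    For an ergodic chain, repeatedly applying the transition matrix to
    any initial distribution converges to the unique fixed point pi = pi P.
    """
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iterations):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

pi = stationary_distribution(P)

# Long-run average of the atomic reward: sum over states of pi(s) * r(s).
long_run_avg = sum(p * rw for p, rw in zip(pi, r))
```

A reward that depends on a sequence of states, e.g. one earned only when the chain occupies a given state for two consecutive steps, cannot be read off a single state of this chain; it requires extending the state space (here, to pairs of states) so the needed history is available. That explicit extension is what the paper's temporal reward formulas and construction method perform implicitly.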