In this paper, we illustrate the use of different methods to support the design of a Wireless Sensor Network (WSN), using as a case study a monitoring system that must track a moving object within a given area. The goal of the study is to find a good trade-off between power consumption and object-tracking reliability. Power saving can be achieved by periodically powering off some of the nodes for a given time interval. Of course, nodes can detect the moving object only while they are on, so the power management strategy can affect the ability to accurately track the object's movements. We propose two models, together with the corresponding analysis and simulation tools, that can be used in a synergistic way: the first model is based on the Markov Decision Well-formed Net (MDWN) formalism, while the second is based on the Stochastic Activity Network (SAN) formalism. The MDWN model is more abstract and is used to compute an optimal power management strategy by solving a Markov Decision Process (MDP); the SAN model is more detailed and is used to perform extensive simulation (using the Möbius tool) in order to analyze different performance indices, both when applying the power management policy derived from the first model and when using different policies.
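To give a concrete flavor of the MDP step, the sketch below solves a deliberately tiny on/off power-management MDP by value iteration. The two states ("near"/"far" for the tracked object), the transition probabilities, the detection reward, the miss penalty, and the energy cost are all hypothetical illustrations, not the state space or parameters of the paper's MDWN model, which is far richer.

```python
# Illustrative only: a toy on/off power-management MDP solved by value
# iteration. All states, probabilities, and rewards below are invented
# for illustration; they are NOT taken from the paper's MDWN model.

# MDP encoded as {state: {action: [(prob, next_state, reward), ...]}}.
# "near"/"far": whether the tracked object is near the sensor node.
# Keeping the node "on" pays an energy cost (-0.4) but earns a detection
# reward (+1.0) when the object is near; "off" saves energy but incurs
# a miss penalty (-1.0) if the object passes by unseen.
MDP = {
    "near": {
        "on":  [(0.7, "near", 0.6), (0.3, "far", -0.4)],   # 1.0 - 0.4 = 0.6
        "off": [(0.7, "near", -1.0), (0.3, "far", 0.0)],
    },
    "far": {
        "on":  [(0.2, "near", -0.4), (0.8, "far", -0.4)],
        "off": [(0.2, "near", 0.0), (0.8, "far", 0.0)],
    },
}

def value_iteration(mdp, gamma=0.9, eps=1e-8):
    """Return (value function, greedy policy) for a discounted MDP."""
    V = {s: 0.0 for s in mdp}
    while True:
        delta = 0.0
        for s in mdp:
            # Q-value of each action: expected reward + discounted future value.
            q = {a: sum(p * (r + gamma * V[s2]) for p, s2, r in outs)
                 for a, outs in mdp[s].items()}
            best = max(q.values())
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:          # stop once the Bellman backup converges
            break
    policy = {s: max(mdp[s], key=lambda a: sum(
        p * (r + gamma * V[s2]) for p, s2, r in mdp[s][a])) for s in mdp}
    return V, policy

V, policy = value_iteration(MDP)
print(policy)  # → {'near': 'on', 'far': 'off'}
```

With these toy numbers the optimal policy is the intuitive one: keep the node awake while the object is nearby, and sleep (to save energy) while it is far away. The actual case study replaces this two-state toy with the MDWN-generated state space and validates the resulting policy by SAN simulation in Möbius.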