Optimal use of energy is a primary concern in field-deployable sensor networks. Artificial intelligence algorithms offer the capability to improve the performance of sensor networks in dynamic environments by minimizing energy utilization without compromising overall performance; however, they have been used only to a limited extent in sensor networks, primarily because of their expensive computing requirements. We describe the use of Markov decision processes (MDPs) for the adaptive control of sensor sampling rates in a sensor network used for human health monitoring. The MDP controller is designed to gather optimal information about the patient's health while guaranteeing a minimum lifetime of the system. At every control step, the MDP controller varies the frequency at which data is collected according to the criticality of the patient's health at that time. We present a stochastic model that is used to generate the optimal policy offline. For cases where a model of the observed process is not available a priori, we describe a Q-learning technique that learns the control policy from a pre-existing master controller. Simulation results illustrating the performance of the controller are presented.
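To make the offline policy-generation step concrete, the following is a minimal sketch of value iteration on an MDP of the kind the abstract describes: states are patient criticality levels, actions are candidate sampling rates, and the reward trades information gain against energy cost. All states, rates, transition probabilities, and reward weights below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Assumed discretization (not from the paper):
# states = patient criticality levels, actions = candidate sampling rates (Hz).
states = ["low", "medium", "high"]
actions = [0.1, 1.0, 10.0]

# P[a][s][s'] : assumed transition probabilities over criticality levels.
# Here the patient's state evolves independently of the chosen rate, a
# simplification; a richer model could include battery level in the state.
row = [[0.9, 0.1, 0.0], [0.3, 0.6, 0.1], [0.1, 0.4, 0.5]]
P = np.array([row, row, row])

def reward(s, a):
    """Information gained minus energy spent; weights are illustrative."""
    info = (s + 1) * np.log1p(actions[a])  # sampling is worth more when critical
    energy = 0.3 * actions[a]              # energy cost grows with the rate
    return info - energy

def value_iteration(gamma=0.95, tol=1e-6):
    """Compute the optimal sampling-rate policy offline."""
    V = np.zeros(len(states))
    while True:
        # Q[s, a] = immediate reward + discounted expected future value.
        Q = np.array([[reward(s, a) + gamma * P[a][s] @ V
                       for a in range(len(actions))]
                      for s in range(len(states))])
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return Q.argmax(axis=1)        # optimal action index per state
        V = V_new

policy = value_iteration()
for s, a in zip(states, policy):
    print(f"criticality={s}: sample at {actions[a]} Hz")
```

With these assumed numbers the policy is monotone: higher criticality never selects a lower sampling rate. The Q-learning variant mentioned in the abstract would replace the known transition matrix `P` with sampled experience, updating a Q-table from transitions observed while the master controller drives the system.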