Markov decision processes for control of a sensor network-based health monitoring system

  • Authors:
  • Anand Panangadan; Syed Muhammad Ali; Ashit Talukder

  • Affiliations:
  • Children's Hospital Los Angeles, University of Southern California, Los Angeles, California (all authors)

  • Venue:
  • IAAI'05: Proceedings of the 17th Conference on Innovative Applications of Artificial Intelligence - Volume 3
  • Year:
  • 2005

Abstract

Optimal use of energy is a primary concern in field-deployable sensor networks. Artificial intelligence algorithms can improve the performance of sensor networks in dynamic environments by minimizing energy utilization without compromising overall performance. However, they have seen only limited use in sensor networks, primarily because of their high computational requirements. We describe the use of Markov decision processes (MDPs) for the adaptive control of sensor sampling rates in a sensor network used for human health monitoring. The MDP controller is designed to gather the most informative data about the patient's health while guaranteeing a minimum lifetime of the system. At each control step, the MDP controller varies the frequency at which data is collected according to the criticality of the patient's health at that time. We present a stochastic model that is used to generate the optimal policy offline. In cases where a model of the observed process is not available a priori, we describe a Q-learning technique that learns the control policy by using a pre-existing master controller. Simulation results illustrating the performance of the controller are presented.
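
The abstract only names the technique, so the sketch below is a minimal, hedged illustration of how an offline MDP policy of this kind can be generated by value iteration. Everything in it is an assumption made for illustration: the three criticality levels, the discretized battery budget standing in for the minimum-lifetime guarantee, and all transition probabilities and reward values are invented, not taken from the paper.

```python
import numpy as np

# Illustrative state space (assumed, not from the paper): the patient's
# criticality level crossed with a discretized battery budget. Each
# action is a sampling rate; rate 0 is an idle/minimal rate that costs
# nothing, so some action is always feasible.
N_CRIT = 3                    # 0 = stable, 1 = elevated, 2 = critical
N_BATT = 11                   # battery levels 0..10
RATES = [0, 1, 2]             # low / medium / high sampling rate
ENERGY_COST = [0, 1, 2]       # battery units consumed per step
GAMMA = 0.95                  # discount factor

# Assumed criticality dynamics (action-independent: sampling observes
# the patient's state, it does not change it).
P_CRIT = np.array([[0.90, 0.09, 0.01],
                   [0.20, 0.70, 0.10],
                   [0.05, 0.25, 0.70]])

# Assumed information reward: faster sampling gathers more data, and
# data is worth more when the patient is critical.
INFO = np.array([[0.1, 0.3, 0.5],
                 [0.2, 0.8, 1.2],
                 [0.3, 1.5, 3.0]])

def value_iteration(tol=1e-6):
    """Compute V(criticality, battery) and the greedy rate policy."""
    V = np.zeros((N_CRIT, N_BATT))
    while True:
        V_new = np.zeros_like(V)
        for c in range(N_CRIT):
            for b in range(N_BATT):
                q_best = -np.inf
                for a in RATES:
                    if ENERGY_COST[a] > b:          # rate not affordable
                        continue
                    b2 = b - ENERGY_COST[a]
                    q = INFO[c, a] + GAMMA * P_CRIT[c] @ V[:, b2]
                    q_best = max(q_best, q)
                V_new[c, b] = q_best
        if np.abs(V_new - V).max() < tol:
            V = V_new
            break
        V = V_new
    # Extract the greedy policy from the converged value function.
    policy = np.zeros((N_CRIT, N_BATT), dtype=int)
    for c in range(N_CRIT):
        for b in range(N_BATT):
            qs = [INFO[c, a] + GAMMA * P_CRIT[c] @ V[:, b - ENERGY_COST[a]]
                  if ENERGY_COST[a] <= b else -np.inf
                  for a in RATES]
            policy[c, b] = int(np.argmax(qs))
    return V, policy

V, policy = value_iteration()
print(policy)   # chosen sampling rate for each (criticality, battery) pair
```

The battery component of the state is what forces the trade-off: high rates look attractive in critical states but deplete the budget that later steps need, which is how a lifetime guarantee can be folded into the policy itself.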
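
For the case without an a-priori model, the abstract mentions Q-learning from a pre-existing master controller. Below is a hedged, self-contained sketch of that idea in tabular form; the master_action heuristic, the simulated step dynamics, and every constant are hypothetical stand-ins, not the paper's actual controller or environment.

```python
import random
import numpy as np

# Same toy state/action space as the value-iteration sketch above;
# all numbers remain illustrative assumptions.
N_CRIT, N_BATT = 3, 11
ENERGY_COST = [0, 1, 2]
P_CRIT = np.array([[0.90, 0.09, 0.01],
                   [0.20, 0.70, 0.10],
                   [0.05, 0.25, 0.70]])
INFO = np.array([[0.1, 0.3, 0.5],
                 [0.2, 0.8, 1.2],
                 [0.3, 1.5, 3.0]])
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
EPISODES, HORIZON = 5000, 50

def master_action(c, b):
    """Hypothetical master controller: sample faster as criticality
    rises, throttle to the minimal rate when the battery runs low."""
    return 0 if b <= 2 else min(c, len(ENERGY_COST) - 1)

def step(c, b, a):
    """Simulated environment transition (needed only for this demo;
    in a deployment the reward and next state would be observed)."""
    reward = INFO[c, a]
    b_next = max(b - ENERGY_COST[a], 0)
    c_next = np.random.choice(N_CRIT, p=P_CRIT[c])
    return reward, c_next, b_next

Q = np.zeros((N_CRIT, N_BATT, len(ENERGY_COST)))
for _ in range(EPISODES):
    c, b = random.randrange(N_CRIT), N_BATT - 1
    for _ in range(HORIZON):
        # Mostly follow the master controller, with a little random
        # exploration so every action's Q-value gets updated.
        if random.random() < EPS:
            a = random.randrange(len(ENERGY_COST))
        else:
            a = master_action(c, b)
        if ENERGY_COST[a] > b:              # respect the energy budget
            a = 0
        r, c2, b2 = step(c, b, a)
        # Off-policy Q-learning update toward the greedy target.
        Q[c, b, a] += ALPHA * (r + GAMMA * Q[c2, b2].max() - Q[c, b, a])
        c, b = c2, b2

policy = Q.argmax(axis=2)   # learned rate per (criticality, battery) pair
```

Because the update is off-policy, the greedy policy extracted from Q can improve on the master that generated the experience, which is the point of bootstrapping from an existing controller.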