Evaluation of learning algorithms for optimal policy representation in sensor-network based human health monitoring systems

  • Authors:
  • Shuping Liu; Mi Zhang

  • Affiliations:
  • Ming Hsieh Department of Electrical Engineering, University of Southern California, Los Angeles, CA (both authors)

  • Venue:
  • ICICS '09: Proceedings of the 7th International Conference on Information, Communications and Signal Processing
  • Year:
  • 2009


Abstract

This paper presents methods for learning a compact representation of the optimal decision policy in a Markov Decision Process (MDP) framework for sensor-network-based human health monitoring systems. A small policy representation is key to deploying the model on sensor nodes with limited memory. The decision process enables distributed sensor nodes to adapt their sampling rates in response to changing event criticality and the energy available at each node. The globally optimal policy is first computed offline by solving the MDP and then deployed onto each node; however, the space complexity of the exact policy representation grows exponentially with the number of sensor nodes and the discretization grain of the problem. In this paper, we compare how well different base supervised learners can compactly represent the optimal decision policy. The results show that unpruned decision trees and high-confidence pruned decision trees achieve the lowest error rates, while the resulting trees are small enough to be stored on the sensor nodes. Ensembles of lower-confidence trees achieve perfect representation with only an order-of-magnitude increase in classifier size compared to individual pruned trees.
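The pipeline the abstract describes — solve the MDP offline for the exact optimal policy, then fit a compact supervised learner to the resulting state-to-action mapping — can be sketched on a toy single-node example. Everything below (the state variables, reward shape, dynamics, and the single threshold rule standing in for the paper's decision-tree learners) is an illustrative assumption, not taken from the paper:

```python
# Hypothetical toy MDP for ONE sensor node (state space, reward, and
# dynamics are illustrative, not the paper's model).
# State: (energy level, event criticality); action: sampling rate.
ENERGY = range(4)   # discretized battery levels 0..3
CRIT = range(3)     # discretized event criticality 0..2
GAMMA = 0.9         # discount factor

def reward(e, c, a):
    # High sampling (a=1) is worth more when criticality is high,
    # but always costs energy; low sampling (a=0) is free.
    return 2.0 * a * c - 1.0 * a

def step(e, c, a):
    # Toy deterministic dynamics: high sampling drains one energy unit,
    # criticality decays toward zero.
    return max(e - a, 0), max(c - 1, 0)

def actions(e):
    # A depleted node can only sample at the low rate.
    return [0, 1] if e > 0 else [0]

def value_iteration(tol=1e-6):
    # Offline computation of the globally optimal policy.
    V = {(e, c): 0.0 for e in ENERGY for c in CRIT}
    while True:
        delta = 0.0
        for (e, c) in V:
            best = max(reward(e, c, a) + GAMMA * V[step(e, c, a)]
                       for a in actions(e))
            delta = max(delta, abs(best - V[(e, c)]))
            V[(e, c)] = best
        if delta < tol:
            break
    # Greedy policy extraction: the exact table has |ENERGY|*|CRIT|
    # entries, and grows exponentially with the number of nodes.
    return {(e, c): max(actions(e),
                        key=lambda a: reward(e, c, a) + GAMMA * V[step(e, c, a)])
            for (e, c) in V}

policy = value_iteration()

def fit_stump(policy):
    # Compact surrogate: a one-threshold rule on criticality, fitted by
    # exhaustive search. This stands in for the paper's supervised
    # tree learners, which trade tree size against error on the table.
    best = None
    for t in CRIT:
        rule = lambda e, c, t=t: 1 if (c >= t and e > 0) else 0
        err = sum(rule(e, c) != a for (e, c), a in policy.items())
        if best is None or err < best[1]:
            best = (t, err)
    return best

threshold, errors = fit_stump(policy)
```

In this toy instance the optimal policy happens to be exactly a threshold rule ("sample fast iff criticality is nonzero and energy remains"), so the surrogate reproduces the table with zero error; in the paper the interesting cases are those where pruned trees trade a small error rate for a much smaller representation.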