Markov-optimal sensing policy for user state estimation in mobile devices

  • Authors:
  • Yi Wang; Bhaskar Krishnamachari; Qing Zhao; Murali Annavaram

  • Affiliations:
  • University of Southern California, Los Angeles (Wang, Krishnamachari, Annavaram); University of California, Davis (Zhao)

  • Venue:
  • Proceedings of the 9th ACM/IEEE International Conference on Information Processing in Sensor Networks
  • Year:
  • 2010


Abstract

Mobile-device-based human-centric sensing and user state recognition provide rich contextual information for various mobile applications and services. However, continuously capturing this contextual information consumes a significant amount of energy and drains the mobile device's battery quickly. In this paper, we propose a computationally efficient algorithm to obtain the optimal sensor sampling policy under the assumption that user state transitions are Markovian. This Markov-optimal policy minimizes user state estimation error while satisfying a given energy consumption budget. We first compare the Markov-optimal policy with uniform periodic sensing for Markovian user state transitions and show that the improvement obtained depends on the underlying state transition probabilities. We then apply the algorithm to two sets of real experimental traces, pertaining to user motion changes and inter-user contacts, and show that the Markov-optimal policy yields an approximately 20% improvement over the naive uniform sensing policy.
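The baseline the paper compares against can be illustrated with a small simulation. The sketch below is not the paper's algorithm; it is a hypothetical illustration of the setup: a two-state Markov chain models the user state, the device senses every `period` steps (so the energy budget is `1/period` of continuous sensing), and between samples the estimator predicts the most likely state by propagating the last observation through the transition matrix. The transition matrix values and function name are assumptions for demonstration only.

```python
import random

def simulate_uniform_sensing(P, T, period, seed=0):
    """Estimate the state-estimation error rate of uniform periodic
    sensing on a 2-state Markov chain with transition matrix P.

    P[i][j] is the probability of moving from state i to state j.
    The device observes the true state every `period` steps; in
    between, it uses a MAP prediction from the last observation.
    """
    rng = random.Random(seed)
    state = 0           # true (hidden) user state
    last_obs, age = 0, 0  # last sensed state and steps since sensing
    errors = 0
    for t in range(T):
        if t % period == 0:      # spend energy: sense the true state
            last_obs, age = state, 0
        else:
            age += 1
        # MAP estimate: most likely state `age` steps after last_obs,
        # obtained by propagating a point mass through P `age` times
        probs = [1.0 if s == last_obs else 0.0 for s in range(2)]
        for _ in range(age):
            probs = [sum(probs[i] * P[i][j] for i in range(2))
                     for j in range(2)]
        estimate = max(range(2), key=lambda j: probs[j])
        if estimate != state:
            errors += 1
        # advance the true Markov chain
        state = 0 if rng.random() < P[state][0] else 1
    return errors / T

# Example: a "sticky" user state sensed at 1/5 of the full budget
P = [[0.9, 0.1], [0.2, 0.8]]
print(simulate_uniform_sensing(P, T=10000, period=5))
```

A Markov-optimal policy, as described in the abstract, would instead allocate the same sensing budget non-uniformly based on the transition probabilities; this baseline gives a reference error rate to improve upon.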