Exploiting processor heterogeneity for energy efficient context inference on mobile phones

  • Authors:
  • Chenguang Shen; Supriyo Chakraborty; Kasturi Rangan Raghavan; Haksoo Choi; Mani B. Srivastava

  • Affiliations:
  • University of California, Los Angeles (all authors)

  • Venue:
  • Proceedings of the Workshop on Power-Aware Computing and Systems
  • Year:
  • 2013

Abstract

In recent years we have seen the emergence of context-aware mobile sensing apps that employ machine learning algorithms on real-time sensor data to infer user behaviors and contexts. These apps are typically optimized for power and performance on the app processors of mobile platforms. However, modern mobile platforms are sophisticated systems on chip (SoCs) in which the main app processors are complemented by multiple co-processors. Recently, chip vendors have begun efforts to make these previously hidden co-processors, such as digital signal processors (DSPs), programmable. In this paper, we explore the energy and performance implications of offloading the computation associated with machine learning algorithms in context-aware apps to DSPs embedded in mobile SoCs. Our results show a 17% reduction in the energy usage of a TI OMAP4-based mobile platform from offloading context classification computation to the DSP core, with negligible latency overhead. We also describe the design of a run-time system service for energy-efficient context inference on Android devices; the service takes parameters from the app to instantiate the classification model and schedules execution on either the DSP or the app processor, as specified by the app.
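To make the service design concrete, the following is a minimal Java sketch of what the app-facing contract described in the abstract might look like: the app supplies model parameters and a preferred execution target, and the service delivers inferred context labels through a callback. All names here (ContextService, ClassifierConfig, ExecutionTarget, ContextListener) are illustrative assumptions, not an API published in the paper.

    // Hypothetical sketch (Java 16+) of the app-facing API for the context
    // inference service described in the abstract; all names are illustrative.
    import java.util.Map;

    public final class ContextInferenceSketch {

        // Where the service should run the classification pipeline.
        enum ExecutionTarget { APP_PROCESSOR, DSP }

        // Parameters an app passes to instantiate a classification model.
        record ClassifierConfig(
                String modelType,        // e.g. "decision-tree"
                int samplingRateHz,      // sensor sampling rate
                int windowSizeSamples,   // samples per classification window
                Map<String, Double> modelParams,
                ExecutionTarget target) {}

        // Callback delivering inferred context labels back to the app.
        interface ContextListener {
            void onContextInferred(String label, double confidence);
        }

        // Minimal service facade: register a classifier, receive inferences.
        static final class ContextService {
            void register(ClassifierConfig config, ContextListener listener) {
                // A real implementation would load the model, route sensor
                // data to the DSP or app processor per config.target(), and
                // invoke the listener with each classification result.
                System.out.printf("Registered %s model on %s%n",
                        config.modelType(), config.target());
            }
        }

        public static void main(String[] args) {
            ContextService service = new ContextService();
            service.register(
                    new ClassifierConfig("decision-tree", 50, 128,
                            Map.of(), ExecutionTarget.DSP),
                    (label, confidence) ->
                            System.out.println(label + " @ " + confidence));
        }
    }

Parameterizing the execution target rather than hardcoding the DSP mirrors the abstract's point that the app chooses where execution happens; a service built this way could fall back to the app processor on SoCs without a programmable DSP.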