In recent years we have seen the emergence of context-aware mobile sensing apps that employ machine learning algorithms on real-time sensor data to infer user behaviors and contexts. These apps are typically optimized for power and performance on the app processors of mobile platforms. However, modern mobile platforms are sophisticated systems-on-chip (SoCs) in which the main app processor is complemented by multiple co-processors. Chip vendors have recently undertaken nascent efforts to make these previously hidden co-processors, such as digital signal processors (DSPs), programmable. In this paper, we explore the energy and performance implications of offloading the computation associated with machine learning algorithms in context-aware apps to DSPs embedded in mobile SoCs. Our results show a 17% reduction in the energy usage of a TI OMAP4-based mobile platform when context classification computation is offloaded to the DSP core, with indiscernible latency overhead. We also describe the design of a run-time system service for energy-efficient context inference on Android devices, which takes parameters from the app to instantiate the classification model and schedules execution on the DSP or the app processor as specified by the app.
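The run-time service described above can be pictured as follows. This is a minimal sketch, not the paper's implementation: the class and method names (`ContextInferenceService`, `schedule`, `Target`) are illustrative assumptions. It models the one scheduling decision the abstract mentions — the app states a preferred execution target, and the service runs inference on the DSP when one is available, falling back to the app processor otherwise.

```java
// Hypothetical sketch of the scheduling decision in a context-inference
// service; names and structure are illustrative, not from the paper.
public class ContextInferenceService {

    // Possible execution targets for the classification computation.
    public enum Target { APP_PROCESSOR, DSP }

    private final boolean dspAvailable;

    public ContextInferenceService(boolean dspAvailable) {
        this.dspAvailable = dspAvailable;
    }

    // Honor the app's requested target when a programmable DSP is
    // present; otherwise fall back to the app processor.
    public Target schedule(Target requested) {
        if (requested == Target.DSP && dspAvailable) {
            return Target.DSP;
        }
        return Target.APP_PROCESSOR;
    }

    public static void main(String[] args) {
        ContextInferenceService withDsp = new ContextInferenceService(true);
        System.out.println(withDsp.schedule(Target.DSP));           // DSP
        System.out.println(withDsp.schedule(Target.APP_PROCESSOR)); // APP_PROCESSOR

        ContextInferenceService noDsp = new ContextInferenceService(false);
        System.out.println(noDsp.schedule(Target.DSP));             // APP_PROCESSOR
    }
}
```

In a real deployment the fallback path matters: an app compiled against such a service should behave identically on SoCs whose co-processors are not exposed, which is why the sketch degrades to the app processor rather than failing.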