Dynamic and adaptive SPM management for a multi-task environment

  • Authors:
  • Weixing Ji, Ning Deng, Feng Shi, Qi Zuo, Jiaxin Li

  • Affiliations:
  • School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China (all authors)

  • Venue:
  • Journal of Systems Architecture: the EUROMICRO Journal
  • Year:
  • 2011

Abstract

In this paper, we present a dynamic and adaptive scratchpad memory (SPM) management strategy targeting a multi-task environment. It can be applied to a contemporary embedded processor that maps the physically addressed SPM into a virtual space with the help of an integrated memory management unit (MMU). Based on mass-count disparity, we introduce a hardware memory reference sampling unit (MRSU) that samples the memory reference stream with very low probability. A captured address is treated as belonging to a frequently referenced memory block. The MRSU generates a hardware interrupt, and software places the identified frequently accessed memory block into the SPM space. The software also modifies the page table so that subsequent accesses to the memory block are redirected to the SPM. Because it does not depend on compiler support or profiling information, the proposed strategy is particularly well suited to SPM management in a multi-task environment. Such an environment usually hosts a real-time operating system (RTOS), and memory access behavior cannot be predicted by static analysis or profiling. We evaluate our SPM allocation strategy by running several tasks on a tiny RTOS with preemptive scheduling. Experimental results show that our approach achieves a 10% average reduction in energy consumption, with 1% runtime performance degradation, compared with a cache-only reference system.
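
The following C sketch is not the authors' implementation; it is a minimal user-space simulation of the idea sketched in the abstract, assuming illustrative block sizes, SPM capacity, and sampling probability. The MRSU is modelled as a Bernoulli sampler over the reference stream, and the interrupt handler promotes the sampled block into a free SPM slot, standing in for the page-table remapping step.

```c
/*
 * Hedged sketch (hypothetical names and parameters): simulates sampling-based
 * SPM promotion. A rare "MRSU sample" triggers a handler that records the hot
 * block as SPM-resident, analogous to copying it into SPM and remapping its
 * page table entry so later accesses resolve to the scratchpad.
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <stdbool.h>

#define BLOCK_SIZE   256u            /* promotion granularity (bytes)     */
#define SPM_SLOTS    8u              /* scratchpad capacity in blocks     */
#define SAMPLE_PROB  0.001           /* MRSU sampling probability         */

static uint32_t spm_slot[SPM_SLOTS]; /* block number held by each slot    */
static bool     spm_used[SPM_SLOTS];

/* "Page table" lookup: is the block containing addr mapped to the SPM?   */
static bool mapped_to_spm(uint32_t addr)
{
    uint32_t block = addr / BLOCK_SIZE;
    for (unsigned i = 0; i < SPM_SLOTS; i++)
        if (spm_used[i] && spm_slot[i] == block)
            return true;
    return false;
}

/* Software handler invoked on an MRSU interrupt: promote the hot block.  */
static void mrsu_handler(uint32_t sampled_addr)
{
    uint32_t block = sampled_addr / BLOCK_SIZE;
    if (mapped_to_spm(sampled_addr))
        return;                       /* already resident in SPM          */
    for (unsigned i = 0; i < SPM_SLOTS; i++) {
        if (!spm_used[i]) {
            spm_used[i] = true;       /* copy block and remap its page    */
            spm_slot[i] = block;
            printf("promoted block %u to SPM slot %u\n", block, i);
            return;
        }
    }
    /* SPM full: a real allocator would evict a victim block here.        */
}

int main(void)
{
    /* Synthetic reference stream: most accesses hit a small hot region.  */
    srand(1);
    for (int i = 0; i < 1000000; i++) {
        uint32_t addr = (rand() % 100 < 90) ? (uint32_t)(rand() % 2048)
                                            : (uint32_t)rand();
        if ((double)rand() / RAND_MAX < SAMPLE_PROB && !mapped_to_spm(addr))
            mrsu_handler(addr);       /* rare sample -> "interrupt"       */
    }
    return 0;
}
```

Because sampling is rare, mostly hot blocks trigger the handler, which mirrors the mass-count disparity argument: a small number of blocks receive the bulk of references, so low-probability sampling suffices to find them.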