Lazy cache invalidation for self-modifying codes

  • Authors:
  • Anthony Gutierrez, Joseph Pusdesris, Ronald G. Dreslinski, Trevor Mudge

  • Affiliations:
  • University of Michigan, Ann Arbor, Michigan, USA (all authors)

  • Venue:
  • Proceedings of the 2012 International Conference on Compilers, Architectures and Synthesis for Embedded Systems (CASES)
  • Year:
  • 2012

Abstract

Just-in-time compilation with dynamic code optimization is often used to improve the performance of applications that rely on high-level languages and virtual run-time environments, such as those found in smartphones. Just-in-time compilation introduces additional overhead into the instruction fetch stage of a processor that is particularly problematic for user applications: instruction cache invalidation due to the use of self-modifying code. This software-assisted cache coherence serializes cache line invalidations, or forces a costly invalidation of the entire instruction cache, and prevents useful instructions from being fetched while the stale instructions are being invalidated. This overhead is not acceptable for user applications, which are expected to respond quickly. In this work we introduce a new technique that can, using a single instruction, invalidate cache lines in page-sized chunks rather than a single line at a time. Lazy cache invalidation reduces the time spent stalling on instruction cache invalidation by removing stale instructions on demand as they are accessed, rather than all at once. The key observation behind lazy cache invalidation is that stale instructions do not necessarily need to be removed from the instruction cache; as long as it is guaranteed that attempts to fetch stale instructions will not hit in the instruction cache, the program will behave as the developer intended.
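
To make the key observation concrete, the following is a minimal software sketch of the idea, not the paper's hardware design: a toy direct-mapped instruction cache in which a page-granularity invalidate simply bumps a per-page version counter in O(1), and each line records the version it was filled under, so stale lines can never hit and are refilled on demand. All sizes, names, and the versioning scheme here are illustrative assumptions.

```c
#include <stdint.h>
#include <stdbool.h>
#include <string.h>
#include <stdio.h>

#define PAGE_SHIFT 12          /* 4 KiB pages (assumed)           */
#define NUM_PAGES  16          /* tiny address space for the demo */
#define NUM_LINES  64          /* direct-mapped toy i-cache       */
#define LINE_SHIFT 6           /* 64-byte lines                   */

static uint32_t page_version[NUM_PAGES];   /* bumped by the lazy invalidate */

typedef struct {
    bool     valid;
    uint64_t tag;        /* full address >> LINE_SHIFT, for simplicity */
    uint32_t version;    /* page_version observed at fill time         */
} line_t;

static line_t icache[NUM_LINES];

/* "Single instruction" page-sized invalidation: no walk over cache lines. */
static void lazy_invalidate_page(uint64_t addr)
{
    page_version[(addr >> PAGE_SHIFT) % NUM_PAGES]++;
}

/* A fetch hits only if the line's recorded version is still current,
 * so stale (pre-modification) instructions are never returned even
 * though they are not physically removed from the cache.            */
static bool icache_fetch(uint64_t addr)
{
    uint64_t tag  = addr >> LINE_SHIFT;
    size_t   idx  = tag % NUM_LINES;
    uint32_t vnow = page_version[(addr >> PAGE_SHIFT) % NUM_PAGES];
    line_t  *l    = &icache[idx];

    if (l->valid && l->tag == tag && l->version == vnow)
        return true;                 /* hit on an up-to-date line */

    /* miss: refill from (possibly rewritten) memory, stamp current version */
    l->valid   = true;
    l->tag     = tag;
    l->version = vnow;
    return false;
}

int main(void)
{
    memset(icache, 0, sizeof icache);
    memset(page_version, 0, sizeof page_version);

    uint64_t pc = 0x1040;
    icache_fetch(pc);                                           /* warm the line */
    printf("hit before invalidate: %d\n", icache_fetch(pc));    /* prints 1 */

    lazy_invalidate_page(pc);        /* e.g. the JIT rewrote code on this page */
    printf("hit after  invalidate: %d\n", icache_fetch(pc));    /* prints 0 */
    return 0;
}
```

The design point the sketch illustrates is that invalidation cost is decoupled from the number of affected lines: the invalidate itself is constant time, and the cache pays for stale lines only lazily, as individual fetches miss and refill.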