Bounding Cache-Related Preemption Delay for Real-Time Systems

  • Authors: Chang-Gun Lee (Univ. of Illinois at Urbana-Champaign, Urbana), Kwangpo Lee (PalmPalm Technology Inc., Seoul, Korea), Joosun Hahn (Seoul National Univ., Seoul, Korea), Yang-Min Seo (Seoul National Univ., Seoul, Korea), Sang Lyul Min (Seoul National Univ., Seoul, Korea), Rhan Ha (Hong-Ik Univ., Seoul, Korea), Seongsoo Hong (Seoul National Univ., Seoul, Korea), Chang Yun Park (Chung-Ang Univ., Seoul, Korea), Minsuk Lee (Han-Sung Univ., Seoul, Korea), Chong Sang Kim (Seoul National Univ., Seoul, Korea)

  • Venue: IEEE Transactions on Software Engineering
  • Year: 2001

Abstract

Cache memory is used in almost all computer systems today to bridge the ever-increasing speed gap between the processor and main memory. However, its use in multitasking computer systems introduces additional preemption delay due to the reloading of memory blocks that are replaced during preemption. This cache-related preemption delay poses a serious problem in real-time computing systems, where predictability is of utmost importance. In this paper, we propose an enhanced technique for analyzing, and thus bounding, the cache-related preemption delay in fixed-priority preemptive scheduling, focusing on instruction caching. The proposed technique improves upon previous techniques in two important ways. First, when calculating the cache-related preemption delay, the technique takes into account the relationship between a preempted task and the set of tasks that execute during the preemption. Second, the technique considers the phasing of tasks to eliminate many infeasible task interactions. These two features are expressed as constraints of a linear programming problem whose solution gives a guaranteed upper bound on the cache-related preemption delay. This paper also compares the proposed technique with previous techniques using randomly generated task sets. The results show that the improvement in worst-case response time prediction by the proposed technique over previous techniques ranges between 5 percent and 18 percent, depending on the cache refill time, when the task set utilization is 0.6. The results also show that the improvement grows as the cache refill time increases, which indicates that accurate prediction of the cache-related preemption delay will become increasingly important if the current trend of a widening speed gap between the processor and main memory continues.
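
For context, the kind of analysis the paper improves upon can be illustrated with a minimal sketch: the standard iterative response-time recurrence for fixed-priority preemptive scheduling, augmented with a naive additive charge for cache reloads on every preemption. This is not the paper's LP-based formulation; the task fields, the `crpd` parameter, and the choice to charge the delay per preempting job are illustrative assumptions, and the paper's contribution is precisely to tighten such a pessimistic per-preemption bound using task relationships and phasing.

```python
# A minimal sketch of fixed-priority response-time analysis with a
# naive additive CRPD charge. This is NOT the paper's LP technique;
# the Task fields and the per-preemption `crpd` bound are assumptions
# made for illustration only.
from dataclasses import dataclass
from math import ceil

@dataclass
class Task:
    wcet: float    # worst-case execution time C_i
    period: float  # period (assumed equal to the deadline) T_i
    crpd: float    # assumed bound on cache reload cost per preemption

def response_time(task, higher_prio, max_iters=1000):
    """Iterate R = C + sum_j ceil(R / T_j) * (C_j + crpd_j) to a fixed point.

    Every preemption by a higher-priority task j is charged its WCET plus
    a pessimistic per-preemption cache reload bound; tighter analyses (like
    the paper's) exclude reloads that cannot actually occur.
    """
    r = task.wcet
    for _ in range(max_iters):
        r_next = task.wcet + sum(
            ceil(r / hp.period) * (hp.wcet + hp.crpd) for hp in higher_prio
        )
        if r_next == r:
            return r      # converged: a guaranteed (if loose) upper bound
        if r_next > task.period:
            return None   # exceeds the deadline: deemed unschedulable
        r = r_next
    return None

# Hypothetical task set, highest priority first.
tasks = [Task(1.0, 5.0, 0.2), Task(2.0, 12.0, 0.3), Task(3.0, 30.0, 0.4)]
for i, t in enumerate(tasks):
    print(f"task {i}: worst-case response time = {response_time(t, tasks[:i])}")
```

Note that increasing `crpd` (e.g., to model a longer cache refill time) inflates every computed response time, which mirrors the paper's observation that accurate CRPD prediction matters more as the cache refill time grows.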