In multi-tasking real-time systems, inter-task cache interference due to preemptions degrades schedulability as well as performance. To address this problem, we propose a novel scheduling scheme, called limited preemptive scheduling (LPS), that limits preemptions to execution points with small cache-related preemption costs. Limiting preemptions decreases the cache-related preemption costs of tasks but increases the blocking delay of higher-priority tasks. The proposed scheme makes an optimal trade-off between these two factors to maximize the schedulability of a given task set while minimizing the cache-related preemption delay of tasks. Experimental results show that the LPS scheme improves the maximum schedulable utilization by up to 40% compared with the traditional fully preemptive scheduling (FPS) scheme. The results also show that up to 20% of processor time is saved by the LPS scheme due to the reduction in cache-related preemption costs. Finally, the results show that both the improvement in schedulability and the saving of processor time by the LPS scheme increase as the speed gap between the processor and main memory widens.
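The trade-off the abstract describes can be sketched with standard fixed-priority response-time analysis: under FPS every preemption by a higher-priority task may add a cache-related preemption delay (CRPD), while under LPS preemptions are deferred to cheap points, which removes most of the CRPD but introduces a blocking term from non-preemptive regions of lower-priority tasks. The sketch below is a minimal illustration of that analysis; the task parameters, CRPD values, and blocking bounds are invented for the example and are not taken from the paper.

```python
import math

def response_time(tasks, i, crpd_of, blocking_of):
    """Iterative response-time analysis for task i.

    tasks is a list of (WCET, period) pairs sorted by descending
    priority; deadlines are assumed equal to periods. crpd_of(j) is
    the CRPD charged per preemption by higher-priority task j, and
    blocking_of(i) is the blocking task i can suffer from
    lower-priority tasks. Returns the response time, or None if the
    fixed-point iteration exceeds the deadline.
    """
    C, T = tasks[i]
    R = C + blocking_of(i)
    while True:
        interference = sum(
            math.ceil(R / tasks[j][1]) * (tasks[j][0] + crpd_of(j))
            for j in range(i))
        R_next = C + blocking_of(i) + interference
        if R_next == R:
            return R
        if R_next > T:           # deadline (= period) miss
            return None
        R = R_next

# Three illustrative tasks: (WCET, period), highest priority first.
tasks = [(1, 5), (2, 10), (6, 20)]

# FPS: every preemption may cost a CRPD of 1 time unit; no blocking.
fps = [response_time(tasks, i, lambda j: 1, lambda i: 0)
       for i in range(len(tasks))]

# LPS: preemptions occur only at cheap points (CRPD 0 here), but each
# task except the lowest-priority one may be blocked for up to 2 time
# units by a lower-priority task's non-preemptive region.
lps = [response_time(tasks, i, lambda j: 0,
                     lambda i: 2 if i < len(tasks) - 1 else 0)
       for i in range(len(tasks))]

print("FPS response times:", fps)   # [1, 4, 20]
print("LPS response times:", lps)   # [3, 5, 10]
```

With these made-up numbers the lowest-priority task's response time drops from 20 to 10 under LPS (no CRPD charged), while the highest-priority task's rises from 1 to 3 (blocking) — the two opposing factors the paper's scheme balances.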