Rapid system responsiveness and predictable execution times are critically important for a large class of real-time embedded systems. Multitasking causes interference in shared processor resources such as caches and TLBs, which not only degrades performance but also, for some applications even more importantly, inflates worst-case execution time (WCET) estimates, because the interference is unpredictable. We present a methodology for task-aware D-TLB interference reduction and preloading in embedded multitasking systems, based on application-specific introspection of a task's state at context-switch time. The proposed technique rests on a synergistic cooperation between the compiler, which performs an application-specific analysis of the task's context, and the OS, which introspects that context at run time to efficiently identify the TLB entries that are live or of "near-future" use.