Cache injection for parallel applications
Proceedings of the 20th international symposium on High performance distributed computing
Cache injection addresses the continuing disparity between processor and memory speeds by placing data into a processor's cache directly from the I/O bus. This disparity adversely affects the performance of memory-bound applications, including certain scientific computations, encryption, image processing, and some graphics applications. Cache injection can reduce memory latency and memory pressure for I/O. The performance of cache injection depends on several factors, including timely usage of data, the amount of data, and the application's data usage patterns. We show that cache injection provides significant advantages over data prefetching by reducing the pressure on the memory controller by up to 96%. Despite its benefits, cache injection may degrade application performance due to early injection of data. To overcome this limitation, we propose injection policies to determine when and where to inject data. These policies are based on OS, compiler, and application information.
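The abstract's key trade-off (injecting too much or too early evicts useful cache lines, while injecting well-timed data avoids a trip to DRAM) can be illustrated with a toy policy function. This is a minimal sketch, not the paper's actual mechanism: the hint fields, thresholds, and function names are all hypothetical stand-ins for the OS-, compiler-, and application-level information the abstract mentions.

```python
# Toy model of an injection policy (illustrative only; all names and
# thresholds are hypothetical, not taken from the paper).

from dataclasses import dataclass


@dataclass
class InjectionHint:
    """Information a policy might combine, per the abstract's three sources."""
    bytes_incoming: int         # size of the arriving I/O transfer
    expected_reuse_cycles: int  # compiler/application estimate: cycles until first use
    cache_free_bytes: int       # OS-visible estimate of spare cache capacity


def choose_target(hint: InjectionHint, reuse_window: int = 10_000) -> str:
    """Return 'cache' to inject the data, or 'memory' to fall back to DRAM.

    Inject only when the transfer fits in spare cache capacity AND will
    be consumed soon; otherwise early injection risks evicting live
    lines before the injected data is ever read.
    """
    if hint.bytes_incoming > hint.cache_free_bytes:
        return "memory"  # would evict useful lines to make room
    if hint.expected_reuse_cycles > reuse_window:
        return "memory"  # data would likely be evicted before first use
    return "cache"


# Small, soon-to-be-used transfer: inject it.
print(choose_target(InjectionHint(4096, 500, 32_768)))    # cache
# Transfer larger than free cache space: write to memory instead.
print(choose_target(InjectionHint(65_536, 500, 32_768)))  # memory
```

The two guard conditions correspond directly to the failure mode the abstract names (early injection degrading performance) and to its observation that benefit depends on data size and timely usage.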