Cache management for discrete processor architectures
ISPA'05 Proceedings of the Third International Conference on Parallel and Distributed Processing and Applications
Abstract: This paper uses a trace-driven simulation technique to study the performance impact of a multithreaded execution environment on the storage hierarchy. In particular, we examine the effects of different multithread scheduling techniques on cache performance, using several program traces that represent a typical server/workstation workload mix. An MRU (most recently used) priority scheduling scheme is proposed and studied against the baseline scheduling scheme. We found that cache performance can be improved over the traditional round-robin scheduling method when the thread with the MRU hit is given higher priority: with a direct-mapped cache, the absolute hit ratio can be improved by 7% over the original ratio. We also studied the performance effects on the cache memory as the number of concurrent threads varies. The results show that both the cache size and the set associativity need to grow with the number of threads in order to maintain comparable cache performance. The main contribution of this paper is a performance comparison of two simple, easy-to-implement schemes against the proposed baseline scheme.
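The scheduling idea described above can be illustrated with a minimal sketch. The code below is not the authors' simulator; it is a hypothetical trace-driven model under simplifying assumptions (a direct-mapped cache of whole-address blocks, one access per scheduling decision, and an MRU policy that simply keeps running the thread whose last access hit, falling back to round-robin otherwise). All names (`DirectMappedCache`, `make_round_robin`, `make_mru_priority`, `run`) are invented for illustration.

```python
def make_cache(num_sets):
    """Direct-mapped cache: one tag per set, indexed by address mod num_sets."""
    return {"tags": [None] * num_sets, "hits": 0, "accesses": 0}

def access(cache, addr):
    """Return True on hit; on miss, install the new tag (evicting the old one)."""
    cache["accesses"] += 1
    num_sets = len(cache["tags"])
    idx, tag = addr % num_sets, addr // num_sets
    if cache["tags"][idx] == tag:
        cache["hits"] += 1
        return True
    cache["tags"][idx] = tag
    return False

def make_round_robin(n):
    """Baseline: cycle through thread ids, skipping finished threads."""
    turn = [0]
    def sched(ready, last_hit):
        while turn[0] not in ready:
            turn[0] = (turn[0] + 1) % n
        t = turn[0]
        turn[0] = (turn[0] + 1) % n
        return t
    return sched

def make_mru_priority(n):
    """MRU priority: keep scheduling the thread whose last access hit."""
    rr = make_round_robin(n)
    def sched(ready, last_hit):
        if last_hit is not None and last_hit in ready:
            return last_hit
        return rr(ready, last_hit)
    return sched

def run(traces, scheduler, cache):
    """Interleave per-thread address traces under a scheduler; return hit ratio."""
    cursors = [0] * len(traces)
    last_hit = None
    while True:
        ready = [t for t in range(len(traces)) if cursors[t] < len(traces[t])]
        if not ready:
            break
        t = scheduler(ready, last_hit)
        hit = access(cache, traces[t][cursors[t]])
        cursors[t] += 1
        last_hit = t if hit else None
    return cache["hits"] / cache["accesses"]
```

With two synthetic threads whose addresses conflict in one cache set (e.g. addresses 0 and 8 with 8 sets), round-robin interleaving keeps evicting the shared set, while MRU priority lets the hitting thread run on and preserve its working set, yielding a higher hit ratio — a toy version of the effect the paper measures.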