Server-based data push architecture for multi-processor environments
Journal of Computer Science and Technology
Cache memories are commonly used to bridge the performance gap between microprocessors and memory technology. To increase the likelihood that a cache can supply instructions and data when they are requested, prefetching can be employed: the cache is primed with instructions and data that are likely to be accessed in the near future. The work presented here describes a prefetching algorithm that ties data cache prefetching to branches in the instruction stream. A history of data references is incorporated into the Branch Target Buffer (BTB); since branch instructions determine which instruction path is followed, data access patterns are likewise dependent on branch behavior. Results indicate that combining this strategy with tagged prefetching can significantly improve cache hit ratios. Beyond raising hit rates, the prefetching policy also significantly reduces overall memory bus traffic.
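The idea of attaching data reference history to BTB entries can be illustrated with a minimal simulation sketch. The entry layout, class names, and the simple stride predictor below are illustrative assumptions for exposition, not the paper's exact design: each BTB entry records the last data address observed on the branch's path and the stride between consecutive accesses, and when the branch is encountered again, the address last + stride is prefetched.

```python
class BTBEntry:
    """Hypothetical BTB entry augmented with data-reference history."""
    def __init__(self, target):
        self.target = target      # predicted branch target address
        self.last_addr = None     # last data address seen on this path
        self.stride = 0           # observed stride between accesses

class BranchDirectedPrefetcher:
    """Sketch of branch-directed data prefetching (assumed structure)."""
    def __init__(self):
        self.btb = {}             # branch PC -> BTBEntry
        self.prefetched = []      # data addresses issued to the cache

    def on_branch(self, pc, target):
        # Look up (or allocate) the branch's BTB entry; if we have
        # history, prefetch the predicted next data address.
        entry = self.btb.setdefault(pc, BTBEntry(target))
        if entry.last_addr is not None:
            self.prefetched.append(entry.last_addr + entry.stride)

    def on_data_access(self, pc, addr):
        # Record the data address reached after branch `pc` and
        # update the observed stride.
        entry = self.btb.get(pc)
        if entry is None:
            return
        if entry.last_addr is not None:
            entry.stride = addr - entry.last_addr
        entry.last_addr = addr

# Example: a loop branch at PC 0x40 whose body walks an array with
# an 8-byte stride. After warm-up, prefetches track the access stream.
pf = BranchDirectedPrefetcher()
for i in range(4):
    pf.on_branch(0x40, 0x10)
    pf.on_data_access(0x40, 0x1000 + 8 * i)
print([hex(a) for a in pf.prefetched])  # → ['0x1000', '0x1010', '0x1018']
```

After the stride is learned on the second iteration, each prefetched address matches the following iteration's access, which is the mechanism by which tying prefetching to branch behavior can raise hit ratios while issuing fewer useless prefetches than blind sequential schemes.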