Prefetching brings data into the cache before the processor requests it, thereby eliminating potential cache misses. There are two major prefetching schemes. In a software scheme, the compiler predicts memory access patterns and inserts prefetch instructions into the code. In a hardware scheme, the hardware predicts memory access patterns at run time and brings data into the cache before the processor requires it. This paper proposes a hardware prefetching scheme in which a second processor prefetches data for the primary processor. The scheme does not predict memory access patterns; instead, the second processor runs ahead of the primary processor to detect future memory accesses and prefetch those references.