This work presents a study of the optimal history size for history-based prefetching, a prediction/prefetching technique that collects the recent history of accesses to individual shared memory pages and uses that information to predict the next access to a page. On correct predictions, this technique hides the latency caused by page faults when the remote access actually occurs. Parameters such as the size of the page-history structure that is stored and transmitted among nodes can be fine-tuned to improve prediction efficiency. Our experiments show that small history sizes yield better performance in the tested applications, while larger values tend to add latency when the page history is transmitted, without improving prediction efficiency.
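The core mechanism can be sketched as follows. This is a minimal, hypothetical illustration (class and method names are invented, not from the paper): each node records its recent sequence of page faults in a bounded history whose length corresponds to the tunable history-size parameter, counts which page tends to follow which, and uses the most frequent successor as the prefetch prediction.

```python
from collections import defaultdict, deque

class HistoryPrefetcher:
    """Sketch of history-based prefetching (hypothetical API, not the
    paper's implementation): keep a bounded history of recent page
    faults and predict the page most often observed after a given one."""

    def __init__(self, history_size):
        # history_size is the tunable parameter studied in the paper:
        # it bounds the per-page history stored and transmitted among nodes.
        self.history = deque(maxlen=history_size)   # recent fault sequence
        self.followers = defaultdict(lambda: defaultdict(int))

    def on_fault(self, page):
        """Record a page fault and update successor counts."""
        if self.history:
            prev = self.history[-1]
            self.followers[prev][page] += 1
        self.history.append(page)

    def predict(self, page):
        """Return the most frequent successor of `page` as the
        prefetch candidate, or None if no history exists for it."""
        succ = self.followers.get(page)
        if not succ:
            return None
        return max(succ, key=succ.get)
```

On a correct prediction, the runtime would fetch the predicted page before the application touches it, overlapping the remote transfer with useful computation; a larger `history_size` means more state to ship between nodes on each transfer, which is the latency cost the experiments measure.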