Scalable and reliable communication for hardware transactional memory
Proceedings of the 17th international conference on Parallel architectures and compilation techniques
Token tenure: PATCHing token counting using directory-based cache coherence
Proceedings of the 41st annual IEEE/ACM International Symposium on Microarchitecture
A Controlled Scheduling Algorithm Decreasing the Incidence of Starvation in Grid Environments
AICI '09 Proceedings of the International Conference on Artificial Intelligence and Computational Intelligence
Token tenure and PATCH: A predictive/adaptive token-counting hybrid
ACM Transactions on Architecture and Code Optimization (TACO)
Switch-based packing technique to reduce traffic and latency in token coherence
Journal of Parallel and Distributed Computing
Improving coherence protocol reactiveness by trading bandwidth for latency
Proceedings of the 9th conference on Computing Frontiers
APCR: an adaptive physical channel regulator for on-chip interconnects
Proceedings of the 21st international conference on Parallel architectures and compilation techniques
Shared-memory multiprocessors are increasingly built from larger numbers of nodes, and implementing cache coherence in such systems is a key issue. Token Coherence is a low-latency cache coherence protocol that avoids indirection on cache-to-cache misses and does not require a totally ordered interconnect. When races are rare, the protocol performs well thanks to its performance policy. Unfortunately, in some medium-to-large systems, and for applications whose threads often access the same data simultaneously, races become common. The protocol then does not perform as well as it could, because it falls back on its persistent request mechanism to prevent starvation. This mechanism is slow and inflexible because it overrides the performance policy; as a consequence, the protocol slows down the system and loses the flexibility and speed of the common case. We propose a new mechanism, priority requests, to replace persistent requests. Our mechanism resolves races while still respecting the performance policy, simply by ordering requests that suffer starvation and giving them higher priority. It thereby handles tokens more efficiently and reduces network traffic.