ARCS'13: Proceedings of the 26th International Conference on Architecture of Computing Systems
Tuning the continual flow pipeline architecture
Proceedings of the 27th ACM International Conference on Supercomputing
Virtual register renaming: energy efficient substrate for continual flow pipelines
Proceedings of the 23rd ACM Great Lakes Symposium on VLSI
Tuning the continual flow pipeline architecture with virtual register renaming
ACM Transactions on Architecture and Code Optimization (TACO)
Since the introduction of the first industrial out-of-order superscalar processors in the 1990s, instruction buffers and cache sizes have kept increasing with every new generation of out-of-order cores. The motivation behind this continuous evolution has been single-thread performance. However, the performance gains from larger instruction buffers and caches come at the expense of area, power, and complexity. We show that this is not the most energy-efficient way to achieve performance. Instead, sizing the instruction buffers to the minimum needed for the common case of L1 data cache hits, and using a new latency-tolerant microarchitecture to handle loads that miss the L1 data cache, improves execution time and energy consumption on the SPEC CPU2000 benchmarks by an average of 10% and 12%, respectively, compared to a large superscalar baseline. Our non-blocking architecture outperforms other latency-tolerant architectures, such as Continual Flow Pipelines, by up to 15% on the same SPEC CPU2000 benchmarks.