Selective cache ways: on-demand cache resource allocation. In Proceedings of the 32nd Annual ACM/IEEE International Symposium on Microarchitecture.
A low power unified cache architecture providing power and performance flexibility (poster session). In ISLPED '00: Proceedings of the 2000 International Symposium on Low Power Electronics and Design.
Proceedings of the 33rd Annual ACM/IEEE International Symposium on Microarchitecture.
Automatic tuning of two-level caches to embedded applications. In Proceedings of the Conference on Design, Automation and Test in Europe - Volume 1.
A self-tuning cache architecture for embedded systems. ACM Transactions on Embedded Computing Systems (TECS).
A highly configurable cache for low energy embedded systems. ACM Transactions on Embedded Computing Systems (TECS).
Multifacet's general execution-driven multiprocessor simulator (GEMS) toolset. ACM SIGARCH Computer Architecture News - Special Issue: dasCMP'05.
A self-tuning configurable cache. In Proceedings of the 44th Annual Design Automation Conference.
Program phase directed dynamic cache way reconfiguration for power efficiency. In ASP-DAC '07: Proceedings of the 2007 Asia and South Pacific Design Automation Conference.
The PARSEC benchmark suite: characterization and architectural implications. In Proceedings of the 17th International Conference on Parallel Architectures and Compilation Techniques.
Proceedings of the 36th Annual International Symposium on Computer Architecture.
Proceedings of the 48th Design Automation Conference.
Dynamic cache reconfiguration for soft real-time systems. ACM Transactions on Embedded Computing Systems (TECS).
Cost-efficient buffer sizing in shared-memory 3D-MPSoCs using wide I/O interfaces. In Proceedings of the 49th Annual Design Automation Conference.
Courteous cache sharing: being nice to others in capacity management. In Proceedings of the 49th Annual Design Automation Conference.
Dynamically reconfigurable hybrid cache: an energy-efficient last-level cache design. In DATE '12: Proceedings of the Conference on Design, Automation and Test in Europe.
To alleviate the high energy dissipation of cache memory, prior research has proposed reconfiguring cache parameters such as capacity, associativity, and line size as program phases change. However, none of this prior work on cache reconfiguration takes thread criticality into consideration. In this paper, we dynamically predict the thread criticality of a parallel application and tune the cache architecture accordingly. The experimental results show that our method not only reduces energy consumption by 42% but also improves system performance by 4% compared to a baseline cache without reconfiguration. Compared with the work by Chen et al. [1], where cache capacity is configured based on hit counts, our method yields an additional 16% energy reduction and 7% performance improvement. Compared with the work by Gordon-Ross et al. [2], where the cache always selects the configuration with the minimum energy consumption for the current interval, our result achieves 8% more energy reduction and 12% more performance improvement.
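To illustrate the general idea of criticality-driven way allocation, the following is a minimal sketch, not the paper's actual algorithm. It assumes a hypothetical interval-based controller that uses per-thread stall cycles as the criticality signal and hands out last-level cache ways in proportion to predicted criticality; the metric, the way count, and the trimming policy are all illustrative assumptions.

```python
# Hypothetical sketch of interval-based, thread-criticality-driven cache way
# allocation. Stall cycles as the criticality metric and the proportional
# policy below are assumptions for illustration, not the paper's method.

TOTAL_WAYS = 16   # ways in the shared last-level cache (assumed)
MIN_WAYS = 1      # every thread keeps at least one way powered on

def predict_criticality(stall_cycles):
    """Treat threads with more stall cycles as more critical.
    Returns one weight per thread, normalized to sum to 1."""
    total = sum(stall_cycles) or 1
    return [s / total for s in stall_cycles]

def allocate_ways(stall_cycles, total_ways=TOTAL_WAYS):
    """Assign ways in proportion to predicted criticality; ways taken away
    from non-critical threads could then be power-gated to save energy."""
    weights = predict_criticality(stall_cycles)
    alloc = [max(MIN_WAYS, round(w * total_ways)) for w in weights]
    # Rounding can overshoot the budget: trim the least critical threads
    # first until the allocation fits in total_ways.
    order = sorted(range(len(alloc)), key=lambda i: weights[i])
    i = 0
    while sum(alloc) > total_ways:
        if alloc[order[i]] > MIN_WAYS:
            alloc[order[i]] -= 1
        else:
            i += 1
    return alloc

# Example: the thread stalling most (400 cycles) gets the most ways.
print(allocate_ways([100, 400, 300, 200]))  # -> [2, 6, 5, 3]
```

A controller like this would rerun at every reconfiguration interval, so the allocation tracks program phase changes rather than being fixed at startup.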