Using data compression for increasing memory system utilization
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
Multiprocessor-System-on-a-Chip (MPSoC) performance and power consumption are greatly affected by the application's data access characteristics. While the way the application is written is critical in shaping the data access pattern, the compiler optimizations employed can also make a significant difference. Given that the cost of off-chip memory accesses (in terms of CPU cycles) continues to rise, minimizing the number and volume of off-chip data transfers in MPSoCs is increasingly important. This paper addresses the problem by proposing data compression to increase the effective on-chip storage space in an MPSoC-based environment. A critical issue is to schedule compressions and decompressions intelligently so that they do not conflict with application execution. In particular, one needs to decide which processors should participate in compression (and decompression) activity at any given point during execution. We propose both "static" and "dynamic" algorithms for this purpose. In the static scheme, the processors are divided into two groups (those performing compression/decompression and those executing the application), and this grouping is maintained throughout the execution of the application. In the dynamic scheme, on the other hand, execution starts with some grouping, but this grouping can change over the course of execution, depending on dynamic variations in the data access pattern.
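The static/dynamic grouping idea can be illustrated with a small sketch. This is not the paper's algorithm: the function names, the use of an off-chip miss rate as the access-pattern signal, and the grow/shrink thresholds are all illustrative assumptions; the sketch only shows the contrast between a fixed split and one that shifts processors between the two groups at runtime.

```python
# Hypothetical sketch of static vs. dynamic processor grouping for
# compression/decompression in an MPSoC; names and thresholds are
# illustrative, not taken from the paper.

def static_grouping(num_procs, num_helpers):
    """Fixed split: the first num_helpers processors do (de)compression,
    the rest execute the application, for the whole run."""
    helpers = set(range(num_helpers))
    workers = set(range(num_helpers, num_procs))
    return helpers, workers

def dynamic_regroup(helpers, workers, miss_rate, grow=0.10, shrink=0.02):
    """Shift one processor between groups based on an observed off-chip
    miss rate (a stand-in for the current data access pattern)."""
    if miss_rate > grow and workers:
        # Off-chip pressure is high: dedicate another processor to compression.
        helpers.add(workers.pop())
    elif miss_rate < shrink and len(helpers) > 1:
        # Pressure is low: return a helper to application execution.
        workers.add(helpers.pop())
    return helpers, workers

# Start from a static split, then adapt at (hypothetical) epoch boundaries.
helpers, workers = static_grouping(num_procs=8, num_helpers=2)
for miss_rate in [0.15, 0.12, 0.01, 0.01]:
    helpers, workers = dynamic_regroup(helpers, workers, miss_rate)
print(len(helpers), len(workers))
```

In this sketch the first two samples enlarge the compression group and the last two shrink it back, mimicking how the dynamic scheme tracks phase changes in the data access pattern while the static scheme would have kept the initial 2/6 split throughout.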