Simultaneous reference allocation in code generation for dual data memory bank ASIPs
ACM Transactions on Design Automation of Electronic Systems (TODAES)
Memory bandwidth presents a formidable bottleneck to accelerating embedded applications, particularly data bandwidth for multiple-issue VLIW processors. An efficient ASIP data cache requires that the cache design be tailored to the target application. Multiple caches, or caches with multiple ports, permit simultaneous parallel access to data, alleviating the bandwidth problem provided that data is placed effectively. We present a solution that greatly simplifies the creation of application-targeted caches and automates the explicit allocation of individual memory accesses to caches and banks. Experimental results demonstrate the effectiveness of our solution.
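As an illustrative sketch only (not the paper's actual algorithm), the bank-allocation step described above can be viewed as a graph-partitioning problem: two memory accesses issued in the same VLIW bundle should target different banks so that both can proceed in parallel. A simple greedy heuristic over a conflict graph, with all names hypothetical, might look like this:

```python
# Hypothetical sketch of greedy dual-bank allocation.
# Variables accessed together in one instruction bundle form a conflict
# edge; placing the two endpoints in different banks lets both accesses
# issue in the same cycle.
from collections import defaultdict

def assign_banks(simultaneous_pairs):
    """Greedily assign each variable to bank 0 or 1, preferring the bank
    with fewer already-placed conflicting neighbors."""
    graph = defaultdict(set)
    for a, b in simultaneous_pairs:   # (a, b) accessed in the same cycle
        graph[a].add(b)
        graph[b].add(a)
    bank = {}
    # Place the most-constrained (highest-degree) variables first.
    for v in sorted(graph, key=lambda v: -len(graph[v])):
        counts = [0, 0]
        for n in graph[v]:
            if n in bank:
                counts[bank[n]] += 1
        bank[v] = 0 if counts[0] <= counts[1] else 1
    return bank

# Example: 'a' and 'b' are accessed in the same bundle, as are 'b' and 'c';
# the heuristic separates each conflicting pair across the two banks.
banks = assign_banks([("a", "b"), ("b", "c")])
```

When the conflict graph is not bipartite, some pairs must share a bank and serialize; the cited graph-coloring register-allocation work addresses the analogous trade-off for registers via spilling.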