Optimal 2D Data Partitioning for DMA Transfers on MPSoCs
DSD '12 Proceedings of the 2012 15th Euromicro Conference on Digital System Design
Explicitly managed memory (EMM) many-cores have been part of the industrial landscape for the last decade; the IBM Cell processor, general-purpose graphics processing units (GP-GPUs), and the STHORM embedded many-core from STMicroelectronics are representative examples. This class of architecture is expected to scale well and to deliver good performance per watt and per mm² of silicon, making it appealing for application problems with regular data access patterns. However, it shifts significant complexity onto the programmer, who must master both parallelization and data movement. High-level programming tools are therefore essential to make effective programming of EMM many-cores accessible to a wide class of programmers. This paper presents a novel approach to simplifying the programming of EMM many-core architectures. It initially addresses the image processing application domain and targets the STHORM platform. The approach takes a high-level description of the computation kernel algorithm and generates an OpenCL kernel optimized for the target architecture, transparently managing parallelization and data movement across the memory hierarchy. The goal is to provide both high productivity and high performance without requiring parallel computing expertise from the programmer or specialization of the application code for the target architecture.
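To make the "transparent data movement" idea concrete, the following is a minimal sketch, not the paper's actual generated code, of the kind of tiled processing loop such a tool would emit for an EMM target. The programmer writes only the per-pixel function (`brighten` here is a hypothetical example); the tool-generated wrapper partitions the image into tiles, stages each tile into a small local buffer (standing in for a DMA transfer into scratchpad memory on a platform like STHORM), computes on local memory only, and copies the result back. All names and sizes are illustrative assumptions.

```c
#include <string.h>

/* Hypothetical per-pixel kernel: the part the programmer actually writes. */
static unsigned char brighten(unsigned char p) {
    int v = p + 40;
    return (unsigned char)(v > 255 ? 255 : v);
}

#define W 16
#define H 16
#define TILE 4  /* tile edge, sized to fit the (simulated) local memory */

/* Simulated local scratchpad; on a real EMM platform this staging copy
   would be a DMA transfer issued by the generated runtime code. */
static unsigned char local_buf[TILE * TILE];

/* Sketch of the tool-generated wrapper: tile, stage in, compute, stage out. */
static void process_image(const unsigned char *in, unsigned char *out) {
    for (int ty = 0; ty < H; ty += TILE) {
        for (int tx = 0; tx < W; tx += TILE) {
            /* "DMA-in": copy one tile, row by row, into local memory */
            for (int r = 0; r < TILE; r++)
                memcpy(&local_buf[r * TILE], &in[(ty + r) * W + tx], TILE);
            /* Compute strictly on local memory */
            for (int i = 0; i < TILE * TILE; i++)
                local_buf[i] = brighten(local_buf[i]);
            /* "DMA-out": copy the finished tile back to external memory */
            for (int r = 0; r < TILE; r++)
                memcpy(&out[(ty + r) * W + tx], &local_buf[r * TILE], TILE);
        }
    }
}
```

In the actual OpenCL output described by the paper, the staging copies would map to asynchronous transfers overlapped with computation (e.g. double buffering), and the two tile loops would map to work-groups; this sketch keeps everything sequential to isolate the data-movement structure.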