Graphics Processing Units and Open Computing Language for parallel computing
Computers and Electrical Engineering
Graphics processing unit (GPU) computing has been an active area of research in recent years. The early adopters of the technology came from the image-processing domain, owing to the difficulty of programming GPUs, but advances in programming languages have since made it possible for people without knowledge of low-level graphics APIs such as OpenGL to develop code for GPUs. Two main GPU architectures, from AMD (formerly ATI) and NVIDIA, gained ground. AMD adapted Stanford's Brook language into an architecture-agnostic programming model, while NVIDIA brought its CUDA framework to a wide audience. Although the two languages have their pros and cons (Brook does not scale as well, and CUDA requires the programmer to account for architecture-level decisions), code written for one cannot be compiled for the other architecture or across platforms. Another opportunity arrived with the idea of combining one or more CPUs and GPUs on the same die. By eliminating some of the interconnect bandwidth issues, this combination makes it practical to offload highly parallel tasks to the GPU. The technological shift towards multicore CPU-only architectures also requires a change in programming methodology and acts as a catalyst for suitable programming languages. Hence, a unified language that can target multicore CPUs, GPUs, and their combinations has gained interest. The Open Computing Language (OpenCL), originally proposed by Apple, developed by the Khronos Group, and supported by both AMD and NVIDIA, is seen as the programming language of choice for parallel programming. In this paper, we motivate our tutorial talk on the use of OpenCL for GPUs and highlight key features of the language. We also outline research directions for OpenCL in EDA. In our tutorial talk, we use EDA as the application domain to get readers started with programming the rising language of parallelism, OpenCL.
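As a taste of the data-parallel style OpenCL encourages, the sketch below shows a vector-addition kernel written in OpenCL C, in which each work-item computes one output element. This is an illustrative example, not code from the paper: the kernel and argument names (vec_add, a, b, out, n) are hypothetical, and the kernel must be compiled and launched by OpenCL host code (platform/device setup, buffer creation, clEnqueueNDRangeKernel), which is omitted here.

```c
// Illustrative OpenCL C kernel (names are hypothetical, not from the paper).
// One work-item handles one array element; the NDRange supplies the index.
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *out,
                      const unsigned int n)
{
    size_t i = get_global_id(0);  // this work-item's global index
    if (i < n)                    // guard: NDRange may be rounded up past n
        out[i] = a[i] + b[i];
}
```

The portability argument the abstract makes rests on the fact that this same kernel source can be compiled at run time for a multicore CPU, a GPU, or a fused CPU-GPU device, with the OpenCL runtime selecting the target.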