In recent computing systems, CPUs often cannot meet increasing throughput demands. To overcome the limits of CPUs in processing heavy workloads, especially computer graphics, GPUs have been widely adopted. The performance of modern computing systems can therefore be maximized when task scheduling between the CPU and the GPU is optimized. In this paper, we analyze the system from the perspectives of performance, energy efficiency, and temperature according to the execution method chosen between the CPU and the GPU. Experimental results show that the GPU achieves better efficiency than the CPU when a single application is executed. When two applications are executed concurrently, however, the GPU does not always guarantee better efficiency than the CPU; the outcome depends on the application characteristics.
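To make the scheduling idea concrete, here is a minimal sketch of a greedy CPU/GPU device-selection policy. It is not the paper's method: the `efficiency` metric, the per-application runtime and energy estimates, and the rule that only one application may hold the GPU at a time are all illustrative assumptions, chosen only to mirror the observation that co-running two applications does not always favor the GPU.

```python
# Hypothetical sketch: greedy device selection for co-running applications.
# All numbers and the efficiency metric are illustrative assumptions,
# not measurements or formulas from the paper.

def efficiency(runtime_s, energy_j):
    """Work-per-(second*joule) proxy: lower runtime and energy give a higher score."""
    return 1.0 / (runtime_s * energy_j)

def schedule(apps):
    """Assign each application to the device with the better standalone
    efficiency, but allow only one application on the GPU at a time,
    so a second GPU-preferring application falls back to the CPU."""
    assignment = {}
    gpu_busy = False
    for name, est in apps.items():
        cpu_eff = efficiency(*est["cpu"])
        gpu_eff = efficiency(*est["gpu"])
        if gpu_eff > cpu_eff and not gpu_busy:
            assignment[name] = "gpu"
            gpu_busy = True
        else:
            assignment[name] = "cpu"
    return assignment

# Illustrative (runtime seconds, energy joules) estimates per device.
apps = {
    "matmul": {"cpu": (12.0, 300.0), "gpu": (1.5, 90.0)},
    "blur":   {"cpu": (4.0, 120.0), "gpu": (0.8, 60.0)},
}
print(schedule(apps))  # matmul takes the GPU; blur falls back to the CPU
```

Both applications individually prefer the GPU here, but the second one is placed on the CPU, which is one simple way a scheduler could account for the paper's finding that GPU superiority does not carry over when two applications run together.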