A game loop architecture for the GPU used as a math coprocessor in real-time applications
Computers in Entertainment (CIE), Special Issue: Media Arts
This article presents a new architecture for implementing all game loop models in games and real-time applications that use the GPU as a mathematics and physics coprocessor, working in parallel with the CPU. The model applies automatic task distribution concepts: the architecture can apply a set of heuristics, defined in Lua scripts, to determine which processor is best suited to a given task. The model follows the GPGPU (general-purpose computation on GPUs) paradigm. The proposed architecture acquires knowledge about the hardware by running tasks on each processor and, by studying their performance over time, finds the best processor for each group of tasks.
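The core idea described above, running each task type on every available processor during a sampling phase, then routing it to the processor with the best observed average runtime, can be sketched as follows. The paper implements its heuristics as Lua scripts; this illustration uses Python for brevity, and the class and method names (`TaskScheduler`, `best_processor`, `run`) are hypothetical, not from the paper.

```python
import time
from collections import defaultdict

class TaskScheduler:
    """Routes each task type to the processor with the best observed
    average runtime, after sampling every processor a few times.

    A minimal sketch of performance-based CPU/GPU task distribution;
    processors are modeled as plain callables for illustration."""

    def __init__(self, processors, samples=3):
        self.processors = processors  # name -> callable(task)
        self.samples = samples        # runs per processor before deciding
        # task name -> processor name -> list of measured runtimes (seconds)
        self.timings = defaultdict(lambda: defaultdict(list))

    def _least_sampled(self, task_name):
        # Pick the processor we know the least about for this task type.
        return min(self.processors,
                   key=lambda p: len(self.timings[task_name][p]))

    def best_processor(self, task_name):
        # Exploration phase: keep sampling until every processor has
        # been tried `samples` times for this task type.
        if any(len(self.timings[task_name][p]) < self.samples
               for p in self.processors):
            return self._least_sampled(task_name)
        # Exploitation phase: lowest average runtime wins.
        return min(self.processors,
                   key=lambda p: sum(self.timings[task_name][p]) /
                                 len(self.timings[task_name][p]))

    def run(self, task_name, task):
        proc = self.best_processor(task_name)
        start = time.perf_counter()
        result = self.processors[proc](task)
        self.timings[task_name][proc].append(time.perf_counter() - start)
        return proc, result
```

In a real engine the callables would dispatch to CPU threads or GPGPU kernels, and the sampling/decision policy would be supplied by a Lua heuristic script rather than hard-coded, but the feedback loop (measure, accumulate, re-decide) is the same.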