Motivated by recent papers comparing CPU and GPU performance, this paper explores two questions: why do we compare microprocessors, and by what means should we compare them? We distinguish two perspectives from which comparisons are made: that of application developers and that of computer architecture researchers. We survey the concerns of each group, identifying the essential information each expects when interpreting a comparison. Because the goals of application developers differ substantially from those of computer architects, we believe the needs of the two groups should be addressed separately. Reproducibility of results is widely acknowledged as the foundation of scientific investigation. Accordingly, platform comparisons must supply enough detail for others to reproduce and contextualize their results. As parallel processing grows in importance and parallel microprocessor architectures continue to proliferate, so will the importance of conducting and publishing reproducible microprocessor platform comparisons. We seek to add our voice to the discussion of how these comparisons should be conducted.
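To make the reproducibility point concrete, consider what "enough detail" might mean in practice: a reported measurement is of little use to others unless it travels with the platform provenance needed to rerun and contextualize it. The sketch below is a minimal illustration of that idea, not the authors' methodology; the names `timed` and `benchmark_report` are hypothetical, and a real comparison would also record compiler versions, flags, input sizes, and thread counts.

```python
import json
import platform
import sys
import time


def timed(fn, *args, repeats=5):
    """Run fn several times and return the best wall-clock time in seconds."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best


def benchmark_report(name, fn, *args):
    """Pair a measurement with platform details others need to reproduce it."""
    return {
        "benchmark": name,
        "best_seconds": timed(fn, *args),
        "platform": {
            "machine": platform.machine(),
            "processor": platform.processor(),
            "system": f"{platform.system()} {platform.release()}",
            "python": sys.version.split()[0],
        },
    }


# Example: report a trivial workload together with its provenance.
report = benchmark_report("sum-1M", sum, range(1_000_000))
print(json.dumps(report, indent=2))
```

Publishing the full `report` rather than the bare timing is what lets a reader judge whether a CPU-vs-GPU comparison was conducted on comparable footing.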