Developing applications is becoming increasingly difficult due to recent growth in machine complexity along many dimensions, especially that of parallelism. We are studying data types that can be used to represent data parallel operations. Developing parallel programs with these data types has numerous advantages, and such a strategy should facilitate parallel programming and enable portability across machine classes and machine generations without significant performance degradation. In this paper, we present our vision of data parallel programming with powerful abstractions. We first discuss earlier work on data parallel programming and list some of its limitations. We then introduce several dimensions along which it is possible to develop more powerful data parallel programming abstractions. Finally, we present two simple examples of data parallel programs that make use of operators developed as part of our studies.
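To make the idea concrete, the following is a minimal, hypothetical sketch of the kind of data type the abstract describes: an array class whose elementwise operators and reductions express data parallelism, so the same program could be mapped onto threads, SIMD lanes, or distributed nodes by the runtime. The `ParallelArray` class, its methods, and the thread-pool backend are illustrative assumptions, not the operators actually developed in the paper.

```python
# Hypothetical data-parallel array abstraction (illustrative only; not the
# paper's actual operators). Elementwise operations and reductions are the
# parallel primitives; here they happen to run on a thread pool, but the same
# interface could target SIMD units, GPUs, or distributed machines.
from concurrent.futures import ThreadPoolExecutor
from functools import reduce


class ParallelArray:
    """A flat array whose operations apply across all elements at once."""

    def __init__(self, data):
        self.data = list(data)

    def map(self, f):
        # Apply f to every element; each application is independent,
        # so the runtime is free to execute them in parallel.
        with ThreadPoolExecutor() as pool:
            return ParallelArray(pool.map(f, self.data))

    def __add__(self, other):
        # Elementwise addition of two arrays of equal length.
        return ParallelArray(a + b for a, b in zip(self.data, other.data))

    def reduce(self, f, init):
        # Combine all elements with f; shown sequentially, but an
        # associative f admits a parallel tree reduction.
        return reduce(f, self.data, init)


a = ParallelArray(range(4))                    # [0, 1, 2, 3]
b = a.map(lambda x: x * x)                     # [0, 1, 4, 9]
total = (a + b).reduce(lambda x, y: x + y, 0)  # sum of [0, 2, 6, 12] -> 20
```

Because the program is written against the array operators rather than against threads or messages, the same source could, in principle, be retargeted across machine classes, which is the portability argument the abstract makes.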