Primitives in mathematical software are usually written and optimized by hand. With the implementation of a “sparse compiler” that is capable of automatically converting a dense program into sparse code, however, a completely different approach to the generation of sparse primitives can be taken. A dense implementation of a particular primitive is supplied to the sparse compiler, after which it can be converted into many different sparse versions of this primitive. Each version is specifically tailored to a class of sparse matrices having a specific nonzero structure. In this article, we discuss some of our experiences with this new approach.