Typing linear algebra: A biproduct-oriented approach
Science of Computer Programming
Motivated by the need to formalize the generation of fast running code for linear algebra applications, we show how an index-free, calculational approach to matrix algebra can be developed by regarding matrices as morphisms of a category with biproducts. This shifts the traditional view of matrices as indexed structures to a type-level perspective analogous to that of the pointfree algebra of programming. The derivation of fusion, cancellation and abide laws from the biproduct equations makes it easy to calculate algorithms implementing matrix multiplication, the kernel operation of matrix algebra, ranging from its divide-and-conquer version to the conventional, iterative one. From errant attempts to learn how particular products and coproducts emerge from biproducts, we not only rediscovered block-wise matrix combinators but also found a way of addressing other operations calculationally, such as Gaussian elimination. A strategy for addressing vectorization along the same lines is also given.
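The divide-and-conquer multiplication mentioned in the abstract follows from the block (biproduct) decomposition of a matrix into four quadrants. A minimal sketch of that idea, in plain Python rather than the paper's categorical notation (function names `split`, `join`, `mul` are illustrative choices, and matrices are assumed square with power-of-two dimension):

```python
# Block decomposition: a square matrix M splits into quadrants
#   M = [[A, B],
#        [C, D]]
# and block multiplication is
#   [[A, B], [C, D]] . [[E, F], [G, H]]
#     = [[A.E + B.G, A.F + B.H], [C.E + D.G, C.F + D.H]],
# which is exactly the recursion below.

def split(m):
    """Split an even-dimension square matrix into four quadrant blocks."""
    n = len(m) // 2
    a = [row[:n] for row in m[:n]]
    b = [row[n:] for row in m[:n]]
    c = [row[:n] for row in m[n:]]
    d = [row[n:] for row in m[n:]]
    return a, b, c, d

def add(x, y):
    """Entry-wise matrix addition."""
    return [[u + v for u, v in zip(rx, ry)] for rx, ry in zip(x, y)]

def join(a, b, c, d):
    """Reassemble four quadrant blocks into one matrix."""
    top = [ra + rb for ra, rb in zip(a, b)]
    bot = [rc + rd for rc, rd in zip(c, d)]
    return top + bot

def mul(x, y):
    """Divide-and-conquer matrix multiplication (dimension a power of two)."""
    if len(x) == 1:                      # base case: 1x1 blocks
        return [[x[0][0] * y[0][0]]]
    a, b, c, d = split(x)
    e, f, g, h = split(y)
    return join(add(mul(a, e), mul(b, g)),
                add(mul(a, f), mul(b, h)),
                add(mul(c, e), mul(d, g)),
                add(mul(c, f), mul(d, h)))
```

For example, `mul([[1, 2], [3, 4]], [[5, 6], [7, 8]])` yields `[[19, 22], [43, 50]]`. The recursion performs eight half-size multiplications per level; Strassen's variant (cited above) trades one of those for extra block additions.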