The general question addressed in this study is: are regular networks suitable for sparse matrix computations? More specifically, we consider a special-purpose self-timed computational array designed for a specific dense matrix computation. We add to each cell in the network the capability to recognize and skip operations that involve zero operands, and then ask how efficient the resulting network is for sparse matrix computation. Answering this question requires studying the effect of data interlock on the performance of self-timed networks. For this purpose, the class of pseudosystolic networks is introduced as a hybrid between systolic and self-timed networks. Networks in this class are easy to analyze and provide a means for studying the worst-case performance of self-timed networks. The well-known concept of computation fronts is also generalized to include irregular data flow, and a technique based on the propagation of such fronts is suggested for estimating the processing time and the communication time of pseudosystolic networks.
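The zero-skipping cell behavior described above can be illustrated with a minimal sketch. This is not the paper's array model; it assumes a simple linear array in which each hypothetical cell handles one matrix row of a matrix-vector product and skips any multiply-accumulate involving a zero operand, counting performed versus skipped operations to show the potential benefit on sparse data.

```python
def cell_process(row, x):
    """One cell: accumulate row . x, skipping zero operands."""
    acc = 0
    performed = skipped = 0
    for a, xj in zip(row, x):
        if a == 0 or xj == 0:
            skipped += 1          # zero operand recognized: no multiply issued
        else:
            acc += a * xj
            performed += 1
    return acc, performed, skipped

def array_matvec(A, x):
    """Array of cells, one per row; returns y = A x and operation counts."""
    y, total_performed, total_skipped = [], 0, 0
    for row in A:
        acc, p, s = cell_process(row, x)
        y.append(acc)
        total_performed += p
        total_skipped += s
    return y, total_performed, total_skipped

# A sparse 3x3 example: 4 nonzeros out of 9 entries.
A = [[2, 0, 0],
     [0, 3, 0],
     [1, 0, 4]]
x = [1, 2, 3]
y, performed, skipped = array_matvec(A, x)
print(y, performed, skipped)   # [2, 6, 13] with 4 multiplies performed, 5 skipped
```

In a real self-timed array the skipped operations would free cell cycles only insofar as data interlock between neighboring cells permits, which is precisely the effect the pseudosystolic model is introduced to analyze.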