Given N matrices A_{1}, A_{2}, \ldots, A_{N} of size N \times N, the matrix chain product problem is to compute A_{1} \times A_{2} \times \cdots \times A_{N}. Given an N \times N matrix A, the matrix powers problem is to calculate the first N powers of A, that is, A, A^{2}, A^{3}, \ldots, A^{N}. We solve the two problems on distributed memory systems (DMSs) with p processors that can support one-to-one communications in T(p) time. Assume that the fastest sequential matrix multiplication algorithm has time complexity O(N^{\alpha}), where the currently best value of \alpha is less than 2.3755. Let p be arbitrarily chosen in the range 1 \leq p \leq N^{\alpha + 1}/(\log N)^{2}. We show that the two problems can be solved by a DMS with p processors in

T_{\rm chain}(N,p) = O({\frac{N^{\alpha + 1}}{p}} + T(p)(({\frac{N^{2(1 + 1/\alpha)}}{p^{2/\alpha}}})(\log^{+}{\frac{p}{N}})^{1 - 2/\alpha} + \log^{+}({\frac{p\log N}{N^{\alpha}}})\log N))

and

T_{\rm power}(N,p) = O({\frac{N^{\alpha + 1}}{p}} + T(p)(({\frac{N^{2(1 + 1/\alpha)}}{p^{2/\alpha}}})(\log^{+}{\frac{p}{2\log N}})^{1 - 2/\alpha} + (\log N)^{2}))

time, respectively, where the function \log^{+} is defined as follows: \log^{+}x = \log x if x \geq 1 and \log^{+}x = 1 if 0 < x < 1.
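To make the two problem statements concrete, here is a minimal sequential sketch in plain Python (this is only an illustration of the definitions, not the paper's parallel DMS algorithm; the function names `chain_product` and `first_powers` are ours):

```python
def mat_mul(X, Y):
    """Multiply two n x n matrices given as lists of lists (naive O(n^3))."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def chain_product(mats):
    """Matrix chain product A_1 x A_2 x ... x A_N, multiplied left to right."""
    P = mats[0]
    for M in mats[1:]:
        P = mat_mul(P, M)
    return P

def first_powers(A, N):
    """The matrix powers problem: return [A, A^2, ..., A^N]."""
    powers = [A]
    for _ in range(N - 1):
        powers.append(mat_mul(powers[-1], A))
    return powers
```

Both problems reduce to N - 1 matrix multiplications done sequentially; the bounds T_chain and T_power above describe how much of this work can be overlapped across p processors when each multiplication itself runs in O(N^{\alpha}) sequential time.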