Matrix bidiagonalization: implementation and evaluation on the Trident processor
Neural, Parallel & Scientific Computations
Within the current decade, process technology promises more than one billion transistors on a single die, operating at frequencies above 10 GHz. We propose the Trident processor, which uses a multi-level ISA to express data parallelism to hardware. Trident is scalable because its architecture is regular and can be widely replicated to efficiently harness the available transistor budget. Moreover, it relies on local communication, which suits the high operating frequencies of future VLSI technology. This paper describes the Trident processor architecture and evaluates its performance on the Basic Linear Algebra Subprograms (BLAS), which are widely used in data-parallel applications. Two metrics are used: R∞, the TFLOPS rate on infinite-size problems, which is primarily a characteristic of the computer technology, and N_1/2, the problem size needed to reach one-half of R∞, which measures the amount of parallelism in a computer architecture. With 128 parallel Trident lanes at a 10 GHz operating frequency, both plausible in the billion-transistor era, R∞ for dot-product, matrix-vector, and matrix-matrix multiplication is 1.1, 1.8, and 2.5 TFLOPS, respectively. Moreover, N_1/2 increases when moving from lower to higher levels of BLAS.
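The R∞ and N_1/2 metrics come from Hockney's classic performance model, in which the achieved rate for a problem of size n is r(n) = R∞ · n / (n + N_1/2). The sketch below illustrates the model using the abstract's reported matrix-matrix R∞ of 2.5 TFLOPS; the N_1/2 value is a hypothetical placeholder, not a figure from the paper.

```python
# Hockney's performance model: r(n) = R_inf * n / (n + n_half), where
# R_inf is the asymptotic rate on infinite-size problems and n_half is
# the problem size at which half of R_inf is reached.

def achieved_rate(n, r_inf, n_half):
    """Achieved rate (same units as r_inf) for a problem of size n."""
    return r_inf * n / (n + n_half)

r_inf = 2.5e12   # FLOPS: matrix-matrix multiply on 128 lanes at 10 GHz (from the abstract)
n_half = 1024    # hypothetical N_1/2, for illustration only

# By construction, at n = n_half the model yields exactly half of R_inf.
print(achieved_rate(n_half, r_inf, n_half))   # 1.25e12, i.e. R_inf / 2
```

A larger N_1/2 means bigger problems are needed to approach peak performance, which is why the abstract's observation that N_1/2 grows from BLAS level 1 to level 3 is a statement about how much parallelism the architecture must fill.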