On-chip Vector Coprocessor Sharing for Multicores

  • Authors:
  • Spiridon F. Beldianu; Sotirios G. Ziavras


  • Venue:
  • PDP '11 Proceedings of the 2011 19th International Euromicro Conference on Parallel, Distributed and Network-Based Processing
  • Year:
  • 2011


Abstract

For most applications that make use of a vector coprocessor, its resources are not highly utilized due to a lack of sustained data parallelism, which often arises from vector-length changes in dynamic environments. The motivation for our work stems from (a) the mandate for multicore designs to make efficient use of on-chip resources, (b) the frequent presence of vector operations in high-performance scientific and embedded applications, (c) the increased probability that different cores deal with different vector lengths at various times, and (d) the fact that different vector kernels, within the same or different application suites, may have diverse computation needs. Our objective is to provide a versatile design framework that facilitates vector coprocessor sharing among multiple cores in a manner that maximizes resource utilization while also yielding very high performance at reduced cost. We propose three basic shared vector coprocessor architectures for multicores, based on coarse-grain, fine-grain, and vector-lane sharing. We benchmark these distinct vector architectures for a dual-core system using floating-point performance and resource utilization as metrics. Our analysis shows that vector-lane sharing, where the number of vector lanes assigned to a core can be controlled dynamically, provides the greatest flexibility and generally yields very good results. However, since each of the three design choices has its own performance advantages under certain vector-load conditions, we ultimately suggest a hybrid vector coprocessor design that can support all three architectural choices according to the collective needs of the cores and applications.
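
To make the vector-lane-sharing idea concrete, the minimal C sketch below models how a controller might repartition a fixed pool of lanes between two cores in proportion to their pending vector lengths. This is an illustration only, not the paper's hardware design: the lane count (TOTAL_LANES = 8), the allocate_lanes function, and the proportional allocation policy are all assumptions made for demonstration.

```c
/*
 * Illustrative sketch only: a software model of dynamic vector-lane sharing,
 * not the authors' hardware design. The lane count and allocation policy
 * below are assumptions made for demonstration.
 */
#include <stdio.h>

#define TOTAL_LANES 8   /* assumed number of lanes in the shared coprocessor */
#define NUM_CORES   2   /* dual-core system, as in the paper's evaluation */

/* Per-core request: the vector length of its current kernel. */
typedef struct {
    int core_id;
    int vector_length;  /* 0 means the core has no vector work pending */
} LaneRequest;

/*
 * Hypothetical policy: split the lane pool in proportion to each core's
 * outstanding vector length, guaranteeing at least one lane to any core
 * with pending vector work. A real controller would also weigh kernel
 * priorities, lane state, and reconfiguration cost.
 */
void allocate_lanes(const LaneRequest req[NUM_CORES], int lanes_out[NUM_CORES]) {
    int total_vl = 0;
    for (int i = 0; i < NUM_CORES; i++)
        total_vl += req[i].vector_length;

    if (total_vl == 0) {                 /* no vector work: leave lanes idle */
        for (int i = 0; i < NUM_CORES; i++) lanes_out[i] = 0;
        return;
    }

    int assigned = 0;
    for (int i = 0; i < NUM_CORES; i++) {
        if (req[i].vector_length == 0) { lanes_out[i] = 0; continue; }
        lanes_out[i] = (TOTAL_LANES * req[i].vector_length) / total_vl;
        if (lanes_out[i] == 0) lanes_out[i] = 1;   /* minimum share */
        assigned += lanes_out[i];
    }

    /* Hand any lanes left over from integer division to the largest requester. */
    int largest = 0;
    for (int i = 1; i < NUM_CORES; i++)
        if (req[i].vector_length > req[largest].vector_length) largest = i;
    lanes_out[largest] += TOTAL_LANES - assigned;
}

int main(void) {
    /* Example: core 0 runs a long-vector kernel, core 1 a short-vector one. */
    LaneRequest req[NUM_CORES] = { {0, 256}, {1, 64} };
    int lanes[NUM_CORES];

    allocate_lanes(req, lanes);
    for (int i = 0; i < NUM_CORES; i++)
        printf("core %d: vector length %d -> %d lanes\n",
               req[i].core_id, req[i].vector_length, lanes[i]);
    return 0;
}
```

Under this assumed policy, the example run assigns roughly 6 lanes to core 0 and 2 to core 1; coarse-grain sharing would instead hand the whole coprocessor to one core at a time, and fine-grain sharing would interleave both cores' instructions across all lanes.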