PMCNOC: A Pipelining Multi-channel Central Caching Network-on-chip Communication Architecture Design

  • Authors:
  • N. Wang; A. Sanusi; P. Y. Zhao; M. Elgamel; M. A. Bayoumi

  • Affiliations:
  • Department of Electrical and Computer Engineering, WVU Institute of Technology, Montgomery, USA 25136; The Center for Advanced Computer Studies, University of Louisiana at Lafayette, Lafayette, USA 70503; Department of Mathematics and Computer Science, Chapman University, Orange, USA 92866; The Center for Advanced Computer Studies, University of Louisiana at Lafayette, Lafayette, USA 70503; The Center for Advanced Computer Studies, University of Louisiana at Lafayette, Lafayette, USA 70503

  • Venue:
  • Journal of Signal Processing Systems
  • Year:
  • 2010

Abstract

As process technology advances into the nanometer regime, ever more functional components can be integrated on a single silicon die, enabling the highly pipelined operations required by applications such as multimedia processing. In recent years, system-on-chip designs have migrated from fairly simple single-processor-and-memory designs to relatively complicated systems with multiple processors, on-chip memories, standard peripherals, and other functional blocks. Communication among these IP blocks is becoming the dominant critical path and the performance bottleneck of system-on-chip designs. Network-on-chip architectures such as Virtual Channel (2004), Black-bus (2004), Pirate (2004), AEthereal (2005), and VICHAR (2006) have emerged as promising solutions for future system-on-chip communication architectures. However, each of these architectures suffers from some combination of high area cost, high communication latency, and low network throughput. This paper presents a novel network-on-chip architecture, Pipelining Multi-channel Central Caching (PMCNOC), that addresses these shortcomings. By embedding a central cache into every switch of the network, blocked head packets can be removed from the input buffers and stored in the cache temporarily, alleviating head-of-line blocking and deadlock and achieving higher network throughput and lower communication latency without incurring higher area cost. Experimental results show that the proposed architecture offers both hardware simplicity and improved system performance compared with existing network-on-chip architectures.
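To make the central-caching idea concrete, the following is a minimal Python sketch of a single switch in which a blocked head packet is parked in a small per-switch cache so that packets queued behind it can still be considered. The class and method names, the cache capacity, and the `port_free` callback are illustrative assumptions for this sketch, not the paper's actual PMCNOC implementation.

```python
from collections import deque

class CentralCacheSwitch:
    """Sketch of one switch: blocked head packets are moved from the input
    buffer into a shared central cache, relieving head-of-line blocking.
    Sizes and names are illustrative, not taken from the PMCNOC paper."""

    def __init__(self, cache_capacity=4):
        self.input_buffer = deque()    # FIFO of (dest_port, payload) tuples
        self.central_cache = []        # temporary store for blocked head packets
        self.cache_capacity = cache_capacity

    def route_cycle(self, port_free):
        """One routing cycle. `port_free(dest_port)` reports whether the
        requested output port can accept a packet this cycle."""
        forwarded = []

        # First retry packets previously parked in the central cache.
        still_blocked = []
        for pkt in self.central_cache:
            (forwarded if port_free(pkt[0]) else still_blocked).append(pkt)
        self.central_cache = still_blocked

        # Then service the head of the input buffer.
        if self.input_buffer:
            head = self.input_buffer[0]
            if port_free(head[0]):
                forwarded.append(self.input_buffer.popleft())
            elif len(self.central_cache) < self.cache_capacity:
                # Park the blocked head so the packet behind it is not stalled
                # on the next cycle (head-of-line blocking relief).
                self.central_cache.append(self.input_buffer.popleft())
        return forwarded
```

In this sketch a packet destined for a busy output port no longer holds up the entire input queue; it waits in the cache while later packets bound for free ports proceed, which is the behavior the abstract credits for the throughput and latency gains.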