Power-performance analysis of networks-on-chip with arbitrary buffer allocation schemes

  • Authors:
  • Mohammad Arjomand; Hamid Sarbazi-Azad

  • Affiliations:
  • Department of Computer Engineering, Sharif University of Technology, Tehran, Iran; Department of Computer Engineering, Sharif University of Technology, Tehran, Iran, and IPM School of Computer Science, Tehran, Iran

  • Venue:
  • IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems - Special section on the ACM IEEE international conference on formal methods and models for codesign (MEMOCODE) 2009
  • Year:
  • 2010


Abstract

End-to-end delay, throughput, energy consumption, and silicon area are the most important design metrics of networks-on-chip (NoCs). Although several analytical models have been proposed for predicting such metrics in NoCs, very few consider the effect of message waiting time in router buffers when predicting overall power consumption, and none consider the structural heterogeneity of network routers. This paper introduces two inter-related analytical models that compute the message latency and power consumption of NoCs with arbitrary topology, buffering structure, and routing algorithm. The buffer allocation scheme defines the buffering space of each individual channel of the NoC and can be either homogeneous (all channels have the same buffer structure) or heterogeneous (each channel has its own buffer structure). We assume no bandwidth sharing among the virtual channels of a physical channel, and that IP cores generate messages following a Poisson distribution. The results obtained from simulation experiments confirm that the proposed models exhibit acceptable accuracy for different network configurations operating under various working conditions. We also show that basing the analysis on a Poisson traffic model remains useful for scenarios with real application workloads.
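The abstract's traffic assumption can be illustrated with a minimal sketch. This is not the authors' model, only a standard way to sample message-injection times for one IP core as a Poisson process: inter-arrival times of a Poisson process with rate λ are exponentially distributed with mean 1/λ. The function name, rate, and horizon below are illustrative choices, not parameters from the paper.

```python
import random

def poisson_arrivals(rate, horizon, seed=0):
    """Sample message-injection times (in cycles) for one IP core as a
    Poisson process with mean injection rate `rate` (messages/cycle).
    Inter-arrival gaps are drawn from an exponential distribution with
    mean 1/rate; the process is truncated at `horizon` cycles."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate)  # next exponential inter-arrival gap
        if t > horizon:
            break
        times.append(t)
    return times

arrivals = poisson_arrivals(rate=0.05, horizon=10_000)
# Empirical injection rate should be close to the nominal 0.05 messages/cycle.
print(len(arrivals) / 10_000)
```

Under this assumption, each channel's arrival process in the analytical model can be characterized by a single rate parameter, which is what makes closed-form waiting-time analysis in the router buffers tractable.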