Reducing the Variance of Point-to-Point Transfers for Parallel Real-Time Programs

  • Authors:
  • Ronald Mraz

  • Venue:
  • IEEE Parallel & Distributed Technology: Systems & Technology
  • Year:
  • 1994

Abstract

Investigations that analyze the time an operating system takes to schedule, interrupt, and context-switch to another process or job have helped developers produce highly optimized and tuned operating systems that sustain more than 99% processor utilization for most uniprocessor applications. However, when these operating systems run on CPUs interconnected by a low-latency (user-space) communication mechanism, the time it takes to send a point-to-point message typically varies widely. In this article, we examine how to reduce the gap between worst-case and average-case message latency, which contributes to variance in fine-grain parallel programs. Changing how the operating system handles interrupt processing and scheduling can greatly narrow this gap, thereby improving a program's performance.
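The gap between average-case and worst-case message latency that the abstract describes can be observed with a simple ping-pong microbenchmark. The sketch below is illustrative only and is not the paper's methodology or measurement setup: it times round trips between two local processes over a socketpair (POSIX-only, since it uses `os.fork`), where OS scheduling and interrupt handling introduce the occasional long outlier that dominates the worst case.

```python
import os
import socket
import statistics
import time

def _recv_exact(sock, n):
    """Receive exactly n bytes from a stream socket."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed connection")
        buf += chunk
    return buf

def measure_roundtrip_latencies(n_messages=2000, payload=b"x" * 64):
    """Ping-pong a small message between two processes and return the
    per-message round-trip times in nanoseconds."""
    parent_sock, child_sock = socket.socketpair()
    pid = os.fork()
    if pid == 0:
        # Child: echo every message back to the parent.
        parent_sock.close()
        for _ in range(n_messages):
            child_sock.sendall(_recv_exact(child_sock, len(payload)))
        child_sock.close()
        os._exit(0)
    child_sock.close()
    samples = []
    for _ in range(n_messages):
        t0 = time.perf_counter_ns()
        parent_sock.sendall(payload)
        _recv_exact(parent_sock, len(payload))
        samples.append(time.perf_counter_ns() - t0)
    parent_sock.close()
    os.waitpid(pid, 0)
    return samples

if __name__ == "__main__":
    lat = measure_roundtrip_latencies()
    avg, worst = statistics.mean(lat), max(lat)
    print(f"avg {avg:.0f} ns, worst {worst:.0f} ns, "
          f"worst/avg ratio {worst / avg:.1f}x")
```

On a typical general-purpose OS the worst-case round trip is often many times the average, precisely because the kernel may deliver an interrupt or preempt one of the two processes mid-exchange; the article's thesis is that changing interrupt processing and scheduling policy shrinks that ratio.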