Real-time computing systems: the next generation
Tutorial: hard real-time systems
Concrete mathematics: a foundation for computer science
IEEE Transactions on Computers
Reducing the variance of point to point transfers in the IBM 9076 parallel computer
Proceedings of the 1994 ACM/IEEE conference on Supercomputing
Architecture and Implementation of Vulcan
Proceedings of the 8th International Symposium on Parallel Processing
Designing and Implementing High-Performance Media-on-Demand Servers
IEEE Parallel & Distributed Technology: Systems & Technology
Communication performance issues for two cluster computers
ACSC '03 Proceedings of the 26th Australasian computer science conference - Volume 16
Parallel job scheduling — a status report
JSSPP'04 Proceedings of the 10th international conference on Job Scheduling Strategies for Parallel Processing
Investigations that analyze the time an operating system takes to schedule, interrupt, and context-switch to another process or job have helped developers produce highly optimized and tuned operating systems that sustain more than 99% processor utilization for most uniprocessor applications. However, when these operating systems run on CPUs interconnected by a low-latency (user-space) communication mechanism, the time to send a point-to-point message typically shows large variance. In this article, we examine how to reduce the gap between worst-case and average-case message latency, a gap that contributes to variance in fine-grain parallel programs. Changing how the operating system handles interrupt processing and scheduling can greatly narrow this gap, thereby improving a program's performance.
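To illustrate the kind of measurement the abstract describes — the gap between worst-case and average-case point-to-point latency — the following minimal sketch (an illustrative assumption, not the authors' benchmark) times UDP round-trips over the loopback interface and reports the mean and worst-case. Spikes in the worst case on a real system would reflect the interrupt and scheduling activity the article discusses.

```python
import socket
import statistics
import threading
import time

N = 1000  # number of round-trip samples

def echo_server(sock):
    # Echo each message back to its sender, N times.
    for _ in range(N):
        data, addr = sock.recvfrom(16)
        sock.sendto(data, addr)

srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))  # let the OS pick a free port
server_addr = srv.getsockname()
thread = threading.Thread(target=echo_server, args=(srv,))
thread.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
lat = []
for _ in range(N):
    t0 = time.perf_counter()
    cli.sendto(b"x", server_addr)
    cli.recvfrom(16)
    lat.append(time.perf_counter() - t0)
thread.join()

avg = statistics.mean(lat)
worst = max(lat)
# The worst/average ratio is a simple proxy for the latency variance
# the article attributes to interrupt processing and scheduling.
print(f"avg {avg * 1e6:.1f} us, worst {worst * 1e6:.1f} us, "
      f"ratio {worst / avg:.1f}x")
```

Running this on a stock (non-real-time) kernel typically shows a worst case many times the average, which is exactly the variance fine-grain parallel programs are sensitive to.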