Fat-tree for local area multiprocessors
IPPS '95 Proceedings of the 9th International Symposium on Parallel Processing
Local Area MultiProcessors (LAMP) is a network of personal workstations with distributed shared physical memory provided by high-performance interconnect technologies such as SCI (Scalable Coherent Interface). LAMP is more tightly coupled than a traditional local area network (LAN) but more loosely coupled than a bus-based multiprocessor. This paper presents a distributed scheduling algorithm that exploits the distributed shared memory in SCI-LAMP to schedule idle remote processors among the requesting workstations. The algorithm addresses fairness by allocating remote processing capacity to the requesting workstations in proportion to their priorities, following the decay-usage scheduling approach. Its performance in scheduling both sequential and parallel jobs is evaluated by simulation. Higher-priority nodes achieve faster job response times and higher speedups than lower-priority nodes. The low scheduling overhead of SCI-LAMP permits a finer granularity of remote-processor sharing than is possible in a LAN.
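To illustrate the decay-usage idea behind the allocation policy, the sketch below grants idle remote processors to the requester whose decayed usage-per-share is lowest, so nodes with more shares (higher priority) receive proportionally more capacity while recent consumers are temporarily penalized. All names and parameters here (`DECAY_FACTOR`, the share values, one-quantum charging) are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch of decay-usage allocation of idle remote
# processors among requesting workstations. Parameters are assumed
# for illustration only.

DECAY_FACTOR = 2.0  # usage is halved at every decay interval (assumption)

class Workstation:
    def __init__(self, name, shares):
        self.name = name
        self.shares = shares   # higher shares -> higher priority
        self.usage = 0.0       # accumulated remote-CPU usage

    def effective_priority(self):
        # Lower value wins: recent usage penalizes a node,
        # scaled down by its share of remote capacity.
        return self.usage / self.shares

def decay_all(nodes):
    # Periodic decay: old usage gradually stops counting against a node.
    for n in nodes:
        n.usage /= DECAY_FACTOR

def allocate_idle_processor(requesters):
    # Grant the idle processor to the requester with the lowest
    # decayed usage per share, then charge it one quantum.
    winner = min(requesters, key=Workstation.effective_priority)
    winner.usage += 1.0
    return winner

nodes = [Workstation("A", shares=4), Workstation("B", shares=1)]
grants = [allocate_idle_processor(nodes).name for _ in range(5)]
print(grants)   # with 4:1 shares, A receives most of the five grants
decay_all(nodes)
```

Over many grant/decay cycles this converges to capacity shares proportional to each node's priority, which is the fairness property the abstract attributes to the decay-usage approach.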