Parallel computing: theory and comparisons
Interfacing Condor and PVM to harness the cycles of workstation clusters
Future Generation Computer Systems - Special issue: resource management in distributed systems
Moore's law: past, present, and future
IEEE Spectrum
GLUnix: a global layer Unix for a network of workstations
Software—Practice & Experience - Special issue on multiprocessor operating systems
The Hector Distributed Run-Time Environment
IEEE Transactions on Parallel and Distributed Systems
The elusive goal of workload characterization
ACM SIGMETRICS Performance Evaluation Review
Evaluating the Scalability of Distributed Systems
IEEE Transactions on Parallel and Distributed Systems
Incremental Design of Scalable Interconnection Networks Using Basic Building Blocks
IEEE Transactions on Parallel and Distributed Systems
Using multicast to pre-load jobs on the ParPar cluster
Parallel Computing
IEEE Transactions on Parallel and Distributed Systems
A parallel workload model and its implications for processor allocation
Cluster Computing
Isoefficiency: Measuring the Scalability of Parallel Algorithms and Architectures
IEEE Parallel & Distributed Technology: Systems & Technology
IPPS '99/SPDP '99 Proceedings of the 13th International Symposium on Parallel Processing and the 10th Symposium on Parallel and Distributed Processing
The ANL/IBM SP Scheduling System
IPPS '95 Proceedings of the Workshop on Job Scheduling Strategies for Parallel Processing
Packing Schemes for Gang Scheduling
IPPS '96 Proceedings of the Workshop on Job Scheduling Strategies for Parallel Processing
Scheduling for Parallel Supercomputing: A Historical Perspective of Achievable Utilization
IPPS/SPDP '99/JSSPP '99 Proceedings of the Job Scheduling Strategies for Parallel Processing
Valuation of Ultra-scale Computing Systems
IPDPS '00/JSSPP '00 Proceedings of the Workshop on Job Scheduling Strategies for Parallel Processing
Core Algorithms of the Maui Scheduler
JSSPP '01 Revised Papers from the 7th International Workshop on Job Scheduling Strategies for Parallel Processing
Implementation of Gang-Scheduling on Workstation Cluster
IPPS '96 Proceedings of the Workshop on Job Scheduling Strategies for Parallel Processing
STORM: lightning-fast resource management
Proceedings of the 2002 ACM/IEEE conference on Supercomputing
IPDPS '03 Proceedings of the 17th International Symposium on Parallel and Distributed Processing
IEEE Transactions on Parallel and Distributed Systems
The Supercomputer Industry in Light of the Top500 Data
Computing in Science and Engineering
On the Interpretation of Top500 Data
International Journal of High Performance Computing Applications
Detection workload in a dynamic grid-based intrusion detection environment
Journal of Parallel and Distributed Computing
Scalability of clusters and MPPs is typically discussed in terms of limits on growth: a mechanism whose cost grows as O(log p) (where p is the number of processors) is said to be more scalable than one whose cost grows as O(p). But in practice p does not grow without limit. We therefore suggest that discussions of scalability should take time into account. System sizes grow with time, so larger systems do need to be supported, but only after some time has passed; in particular, there is no real need to support arbitrarily large systems right now. Surprisingly, when time is put into the picture in this way, we find that centralized control is actually quite scalable. The reason is that the capabilities of a centralized control node grow at a fast pace due to Moore's law, and this growth appears to be more than enough to keep up with the growth patterns currently displayed by parallel systems.
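The argument can be made concrete with a toy model. The sketch below (an illustration, not taken from the paper) assumes, hypothetically, that system size p doubles every 3 years while the capability of the control node doubles every 1.5 years under Moore's law; the controller's O(p) workload, measured relative to its own capability, then shrinks over time rather than growing.

```python
# Illustrative sketch (assumed doubling periods, not from the paper):
# a centralized controller does O(p) work, but its own node also gets
# faster over time. If node capability doubles faster than p grows,
# the controller's relative load stays bounded and in fact decreases.

def doubling(initial, years, period):
    """Value after `years` years, doubling once every `period` years."""
    return initial * 2 ** (years / period)

for year in range(0, 16, 3):
    p = doubling(128, year, 3.0)     # processors to manage (O(p) work)
    cap = doubling(1.0, year, 1.5)   # relative control-node capability
    print(f"year {year:2d}: p = {p:6.0f}, "
          f"capability = {cap:7.1f}, load/capability = {p / cap:6.1f}")
```

With these assumed rates, the load-to-capability ratio falls from 128 at year 0 to 4 at year 15, which is the abstract's point: a single control node keeps up as long as its capability grows at least as fast as the system it manages.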