This BOF will continue the debate over productivity metrics for supercomputers. At several recent user forums, a consensus emerged that it is not possible to develop petascale applications without interactive access to thousands of processors. Yet most large systems are managed through a batch scheduler with long (and unpredictable) queue wait times, and most batch scheduling policies treat high system utilization as "good". But high utilization dilates the average queue wait time and increases wait-time unpredictability, both of which are "bad" for application developers' productivity. What options exist for resolving these conflicting implications of running a supercomputer at high system utilization? Is it possible to manage a supercomputer to meet both the high-throughput demands of stable applications and the on-demand access requirements of large-scale code developers concurrently? Or do these two usage scenarios inherently conflict? Participants will explain and debate several creative solutions that could enable both high throughput and high availability for program development.