The National Institute for Computational Sciences (NICS) at the University of Tennessee currently operates two computational resources for the eXtreme Science and Engineering Discovery Environment (XSEDE): Kraken, a 112,896-core Cray XT5 for general-purpose computation, and Nautilus, a 1,024-core SGI Altix UV 1000 for data analysis and visualization. We analyze a year's worth of accounting logs from Kraken and Nautilus to understand how users take advantage of these two systems and how analysis jobs differ from general HPC computation. We find that researchers exploit the flexibility these systems offer, running a wide variety of jobs at many scales and using the full range of available core counts and memory. Jobs on Nautilus tend to use less walltime and more memory per core than jobs on Kraken, and researchers are more likely to run interactive jobs on Nautilus than on Kraken. Small jobs experience a good quality of service on both systems. This information can inform the management and allocation of time on existing HPC and analysis systems, as well as planning for the deployment of future HPC and analysis systems.
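The paper's analysis scripts are not reproduced here; the following is a minimal Python sketch of how per-job metrics such as walltime, memory per core, and the share of interactive jobs might be summarized per system from an accounting-log export. The file name and column names (system, cores, walltime_hours, memory_gb, interactive) are assumptions for illustration, not the actual NICS log format.

"""Hypothetical sketch: summarizing per-job metrics from accounting logs.

Assumes a CSV export of accounting records with invented column names:
system, cores, walltime_hours, memory_gb, interactive.
"""

import csv
from collections import defaultdict
from statistics import median


def summarize(path):
    """Group jobs by system and report median walltime, median memory per
    core, and the fraction of interactive jobs."""
    jobs = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            jobs[row["system"]].append(row)

    for system, rows in jobs.items():
        walltimes = [float(r["walltime_hours"]) for r in rows]
        mem_per_core = [float(r["memory_gb"]) / int(r["cores"]) for r in rows]
        interactive = sum(r["interactive"] == "yes" for r in rows)
        print(f"{system}: {len(rows)} jobs, "
              f"median walltime {median(walltimes):.2f} h, "
              f"median memory/core {median(mem_per_core):.2f} GB, "
              f"{interactive / len(rows):.0%} interactive")


if __name__ == "__main__":
    summarize("accounting_log.csv")  # hypothetical file name

Run against separate exports for each system (or a combined export with a system column, as assumed here), this kind of grouping is enough to reproduce the walltime, memory-per-core, and interactivity comparisons described in the abstract.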