Data-intensive scientific applications require rapid access to local and geographically distributed data; however, storage systems and wide-area networking impose significant I/O latency bottlenecks. LambdaRAM is a high-performance, multi-dimensional, distributed cache that harnesses memory from multiple clusters interconnected by ultra-high-speed networks to give applications rapid access to both local and remote data. It mitigates latency bottlenecks by employing proactive latency-mitigation heuristics based on an application's access patterns. We present results using LambdaRAM to rapidly stride, by time and geographic coordinates, through remote multi-dimensional datasets from NASA's Modeling, Analysis and Prediction (MAP) 2006 project to compute wind shear for hurricane and tropical cyclone analysis. Our experiments demonstrate up to a 20-fold speedup in the computation of wind shear with LambdaRAM.
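The core idea described above, a cache that watches an application's access pattern and proactively fetches the data it predicts will be needed next, can be sketched in a few lines. This is an illustrative toy, not the LambdaRAM API: the `PrefetchingCache` class, its stride-detection heuristic, and the `fetch` backend are all hypothetical stand-ins for LambdaRAM's remote-memory reads over high-speed networks.

```python
# Hypothetical sketch of proactive, access-pattern-based prefetching in the
# spirit of LambdaRAM. A strided walk over block indices is detected from
# recent accesses, and the predicted next blocks are fetched ahead of demand.

class PrefetchingCache:
    def __init__(self, fetch, depth=2):
        self.fetch = fetch      # backend fetch(block_id) -> data (stands in for a remote read)
        self.depth = depth      # how many blocks to prefetch ahead of the application
        self.cache = {}
        self.history = []       # recent block ids, used to infer the stride

    def _stride(self):
        # Infer a constant, nonzero stride from the last three accesses.
        if len(self.history) >= 3:
            a, b, c = self.history[-3:]
            if b - a == c - b != 0:
                return c - b
        return None

    def get(self, block_id):
        self.history.append(block_id)
        if block_id not in self.cache:
            self.cache[block_id] = self.fetch(block_id)   # demand miss
        stride = self._stride()
        if stride is not None:
            # Proactively pull the blocks the detected stride predicts.
            for k in range(1, self.depth + 1):
                nxt = block_id + k * stride
                if nxt not in self.cache:
                    self.cache[nxt] = self.fetch(nxt)
        return self.cache[block_id]

fetch_log = []
def fetch(block_id):
    fetch_log.append(block_id)
    return f"data-{block_id}"

cache = PrefetchingCache(fetch)
for b in (0, 4, 8, 12):          # the application strides through the dataset
    cache.get(b)
# Once the stride of 4 is detected at block 8, blocks 12 and 16 are
# prefetched, so the later access to block 12 is a cache hit.
```

In LambdaRAM the equivalent decisions are made per application over multi-dimensional datasets, so the "stride" is a pattern in time and geographic coordinates rather than a single integer, and the backing fetches cross wide-area optical networks rather than a local function call.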