Managing Very Large Distributed Datasets

  • Authors:
  • Miguel Branco; Ed Zaluska; David Roure; Pedro Salgado; Vincent Garonne; Mario Lassnig; Ricardo Rocha

  • Affiliations:
  • CERN - European Organization for Nuclear Research; University of Southampton, UK; University of Innsbruck, Austria

  • Venue:
  • OTM '08: Proceedings of the OTM 2008 Confederated International Conferences (CoopIS, DOA, GADA, IS, and ODBASE), Part I, On the Move to Meaningful Internet Systems
  • Year:
  • 2008

Abstract

In this paper, we introduce a system for handling very large datasets that must be stored across multiple computing sites. Data distribution introduces complex management issues, particularly as computing sites may use different storage systems with different internal organizations. The motivation for our work is the ATLAS Experiment at the Large Hadron Collider (LHC) at CERN, where the authors are involved in developing the data management middleware. This middleware, called DQ2, is responsible for shipping petabytes of data every month to research centres and universities worldwide and has achieved aggregate throughputs in excess of 1.5 Gbytes/sec over the wide-area network. We describe DQ2's design and implementation, which builds upon previous work on distributed file systems, peer-to-peer systems and Data Grids. We discuss its fault-tolerance and scalability properties and briefly describe results from its daily use in the ATLAS Experiment.
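To make the abstract's central idea concrete, the sketch below shows a toy replica catalog mapping dataset names to the sites holding a copy. This is purely illustrative: the class and method names (`ReplicaCatalog`, `add_replica`, `sites_for`) and the sample dataset/site names are hypothetical and do not reflect DQ2's actual API or internals, only the general notion of tracking dataset replicas across heterogeneous sites.

```python
# Hypothetical toy sketch (NOT DQ2's real interface): a catalog mapping
# dataset names to the set of computing sites that hold a replica.
class ReplicaCatalog:
    def __init__(self):
        # dataset name -> set of site names holding a replica
        self._replicas = {}

    def add_replica(self, dataset, site):
        """Record that `site` holds a copy of `dataset`."""
        self._replicas.setdefault(dataset, set()).add(site)

    def sites_for(self, dataset):
        """Return a sorted list of sites with a replica of `dataset`."""
        return sorted(self._replicas.get(dataset, set()))


# Usage with invented example names:
catalog = ReplicaCatalog()
catalog.add_replica("mc08.muons.v1", "CERN")
catalog.add_replica("mc08.muons.v1", "BNL")
print(catalog.sites_for("mc08.muons.v1"))  # ['BNL', 'CERN']
```

In a real deployment such a catalog would itself be distributed and fault-tolerant, which is precisely the design territory the paper explores.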