Managing Petabyte-Scale Storage for the ATLAS Tier-1 Centre at TRIUMF

  • Authors:
  • Denice Deatrich; Simon Liu; Chris Payne; Réda Tafirout; Rodney Walker; Andrew Wong; Michel Vetterli


  • Venue:
  • HPCS '08 Proceedings of the 2008 22nd International Symposium on High Performance Computing Systems and Applications
  • Year:
  • 2008


Abstract

The ATLAS experiment at the Large Hadron Collider (LHC), located in Geneva, will collect 3 to 4 petabytes (PB, 10^15 bytes) of data for each year of its operation once fully commissioned. Secondary data sets resulting from event reconstruction, reprocessing and calibration will add a further 2.5 PB for each year of data taking. Simulated data sets also require significant resources, nearing 1 PB per year. The data will be distributed worldwide to ten Tier-1 computing centres within the Worldwide LHC Computing Grid (WLCG), which will operate around the clock. One of these centres is hosted at TRIUMF, Canada's National Laboratory for Particle and Nuclear Physics, located in Vancouver, BC. By the year 2010, the storage capacity at TRIUMF will consist of about 3 PB of disk storage and 2 PB of tape storage. At present, the installed disk capacity is 750 terabytes (TB, 10^12 bytes) and the tape capacity is 560 TB, both using state-of-the-art technology. dCache (www.dcache.org) is used to manage the entire storage and provide a common file namespace; it is a highly scalable and configurable solution. In this paper we describe and review the storage infrastructure and configuration currently in place at the Tier-1 centre at TRIUMF, for both disk and tape, as well as the management software and tools that have been developed.
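To put the quoted volumes in perspective, a back-of-the-envelope calculation of the yearly totals can be sketched as follows (figures are taken from the abstract; the even split across Tier-1 centres is an illustrative assumption, not the actual WLCG distribution model):

```python
# Yearly ATLAS data volumes quoted in the abstract, in petabytes.
PB = 10**15  # bytes, as defined in the abstract

raw = 3.5        # midpoint of the 3-4 PB of raw data per year
secondary = 2.5  # reconstruction, reprocessing and calibration output
simulated = 1.0  # simulated data sets

total_pb = raw + secondary + simulated
total_bytes = total_pb * PB
print(f"Total per year: {total_pb} PB = {total_bytes:.2e} bytes")

# Hypothetical even split across the ten Tier-1 centres,
# purely to illustrate the scale each centre would face.
per_tier1_pb = total_pb / 10
print(f"Even split across 10 Tier-1s: {per_tier1_pb} PB each")
```

At roughly 7 PB per year in total, even a naive even split leaves each Tier-1 with several hundred terabytes of new data annually, which is consistent with the 750 TB disk and 560 TB tape capacities already installed at TRIUMF.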