A DSM-based fragmented data sharing framework for grids

  • Authors:
  • Po-Cheng Chen; Jyh-Biau Chang; Ce-Kuen Shieh; Chia-Han Lin; Yi-Chang Zhuang

  • Affiliations:
  • Po-Cheng Chen, Ce-Kuen Shieh, Chia-Han Lin: Institute of Computer and Communication Engineering, Department of Electrical Engineering, National Cheng Kung University, No. 1, Ta-Hsueh Road, Tainan City 701, Taiwan, ROC
  • Jyh-Biau Chang: Department of Digital Applications, Leader University, No. 188, Sec. 5, Au-Chung Road, Tainan City 709, Taiwan, ROC
  • Yi-Chang Zhuang: Home Network Technology Center, Industrial Technology Research Institute/South, No. 31, Gongye 2nd Road, Annan District, Tainan City 709, Taiwan, ROC

  • Venue:
  • Future Generation Computer Systems
  • Year:
  • 2010

Abstract

Sharing gigabyte- and terabyte-scale scientific and data-capture files in conventional data grid systems is inefficient because conventional approaches copy the entire shared file to a user's local storage even when only a tiny fragment of it is required. Such whole-file transfers waste transmission time and local storage space, and they introduce the additional problem of keeping replicas synchronized. Traditionally, systems sidestep this problem by treating shared files as read-only, thereby giving up guaranteed replica consistency. This paper presents a DSM-based fragmented data sharing framework called "Spigot", which transfers only the necessary fragments of a large file on user demand, thereby reducing data transmission time, wasted network bandwidth, and required storage space. Data waiting time is further reduced by overlapping data transmission with data analysis, while the DSM model is used to keep replicas synchronized. Experiments with real data-intensive applications show reduced turnaround time, particularly when the fragment size is small and both the analysis time and the network latency are high.
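The two ideas the abstract describes, fetching only the fragments an application actually touches and overlapping transfer with analysis, can be illustrated with a short sketch. The code below is not Spigot's actual interface; the FragmentedFile class, its fetch_fragment callable, and the one-fragment-ahead prefetch policy are hypothetical stand-ins for whatever transport and consistency machinery the framework provides.

```python
# Minimal sketch (hypothetical, not Spigot's API): demand-driven fragment
# access with a background prefetcher so transfer overlaps with analysis.
import threading
from concurrent.futures import ThreadPoolExecutor


class FragmentedFile:
    """Expose a large remote file as fixed-size fragments fetched on demand."""

    def __init__(self, fetch_fragment, num_fragments, prefetch_depth=2):
        # fetch_fragment(index) -> bytes is supplied by the caller and stands
        # in for whatever transport the grid middleware actually uses.
        self._fetch = fetch_fragment
        self._num = num_fragments
        self._cache = {}
        self._pending = {}
        self._lock = threading.Lock()
        self._pool = ThreadPoolExecutor(max_workers=prefetch_depth)

    def read(self, index):
        """Return fragment `index`, fetching it only if it is not cached,
        and start prefetching the next fragment in the background."""
        frag = self._get(index)
        if index + 1 < self._num:
            self._prefetch(index + 1)  # overlap transfer with analysis
        return frag

    def _get(self, index):
        with self._lock:
            if index in self._cache:
                return self._cache[index]
            fut = self._pending.get(index)
        if fut is None:
            data = self._fetch(index)   # demand fetch (blocking)
        else:
            data = fut.result()         # prefetch already in flight
        with self._lock:
            self._cache[index] = data
            self._pending.pop(index, None)
        return data

    def _prefetch(self, index):
        with self._lock:
            if index in self._cache or index in self._pending:
                return
            self._pending[index] = self._pool.submit(self._fetch, index)


if __name__ == "__main__":
    # Usage: analyse fragments as they arrive instead of copying the whole file.
    source = [bytes([i]) * 1024 for i in range(8)]  # stand-in for remote data
    f = FragmentedFile(lambda i: source[i], num_fragments=len(source))
    total = sum(len(f.read(i)) for i in range(len(source)))
    print("bytes processed:", total)
```

In this sketch only the fragments that are read ever cross the network, and each read kicks off the fetch of the next fragment so the analysis of fragment i proceeds while fragment i+1 is in transit; replica synchronization, which Spigot handles through the DSM model, is outside the scope of the example.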