Enhancement of anticipative recursively adjusting mechanism for redundant parallel file transfer in data grids

  • Authors:
  • Chao-Tung Yang;Ming-Feng Yang;Wen-Chung Chiang

  • Affiliations:
High-Performance Computing Laboratory, Department of Computer Science, Tunghai University, Taichung, Taiwan, ROC (Chao-Tung Yang; Ming-Feng Yang); Department of Information Networking Technology, Hsiuping Institute of Technology, Taichung County, Taiwan, ROC (Wen-Chung Chiang)

  • Venue:
  • Journal of Network and Computer Applications
  • Year:
  • 2009

Abstract

Co-allocation architectures can be used to enable parallel transfers of data files from multiple replicas stored at different grid sites in data grids. Schemes based on co-allocation models have been proposed to exploit the different transfer rates among various client-server network links and to adapt to dynamic rate fluctuations by dividing data into fragments. These schemes show that performance improves as more fragments are used. In practice, however, each scheme suits only specific situations, and such situations are not common; for example, how many blocks should a data set be divided into? To address this issue, we proposed the anticipative recursively adjusting mechanism (ARAM) in a previous work. Its key feature is performance tuning through adjustment of the alpha value, which allows it to adapt to the varying network conditions of data grid environments. In this paper, the TCP Bandwidth Estimation Model (TCPBEM) is used to evaluate dynamic link states by measuring TCP throughput and packet loss rates between grid nodes. We integrated this model into ARAM, calling the result the anticipative recursively adjusting mechanism plus (ARAM+), which is more reliable and reasonable than its predecessor. We also designed a Burst Mode (BM) that increases ARAM+ transfer rates. This approach not only adapts to the worst network links but also speeds up overall performance.
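
The abstract does not give TCPBEM's exact formulas, but as a rough illustration of the underlying idea, the sketch below (Python, with hypothetical function and replica names) estimates each link's throughput from its measured round-trip time and packet loss rate using the well-known Mathis TCP model, then splits a file across replica servers in proportion to those estimates, in the spirit of co-allocated parallel transfer. It is a minimal sketch of the concept, not the paper's actual ARAM+/TCPBEM implementation.

```python
import math

def estimate_tcp_throughput(mss_bytes, rtt_s, loss_rate, c=1.22):
    """Rough TCP throughput estimate (bytes/s) from the Mathis model:
    BW ~ (MSS / RTT) * C / sqrt(p).  Used here only as a stand-in for
    the paper's TCPBEM link evaluation."""
    if loss_rate <= 0:
        loss_rate = 1e-6  # avoid division by zero on loss-free probes
    return (mss_bytes / rtt_s) * c / math.sqrt(loss_rate)

def allocate_fragments(links, file_size_bytes):
    """Split a file across replica servers in proportion to each link's
    estimated throughput, mirroring the co-allocation idea of assigning
    more data to faster links."""
    rates = {name: estimate_tcp_throughput(mss, rtt, p)
             for name, (mss, rtt, p) in links.items()}
    total = sum(rates.values())
    return {name: int(file_size_bytes * r / total)
            for name, r in rates.items()}

if __name__ == "__main__":
    # Hypothetical replica servers: (MSS in bytes, RTT in seconds, loss rate)
    links = {
        "replica_A": (1460, 0.020, 0.001),
        "replica_B": (1460, 0.080, 0.005),
        "replica_C": (1460, 0.150, 0.020),
    }
    print(allocate_fragments(links, 1 << 30))  # split a 1 GiB file
```

In this toy allocation, the slowest, lossiest link receives the smallest fragment, which is the same intuition ARAM+ pursues when it anticipates weak links and adjusts fragment sizes recursively rather than once up front.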