Analysis and reduction of data spikes in thin client computing

  • Authors:
  • Yang Sun; Teng Tiow Tay

  • Affiliations:
  • Department of Electrical and Computer Engineering, National University of Singapore, Singapore (both authors)

  • Venue:
  • Journal of Parallel and Distributed Computing
  • Year:
  • 2008

Abstract

While various optimization techniques have been used in existing thin client systems to reduce network traffic, the screen updates triggered by many user operations still result in long interactive latencies in many contemporary network environments. Long interactive latencies degrade users' perception of graphical interfaces and visual content. These latencies arise when data spikes must be transferred over a network with limited available bandwidth; a data spike consists of a large amount of screen update data produced in a very short time. In this paper, we propose a model to analyze the packet-level redundancy in screen update streams caused by the repainting of graphical objects, and we use this model to analyze the data spikes in those streams. Based on the analysis results, we design a hybrid cache-compression scheme. The scheme caches the screen updates belonging to data spikes on both the server and client sides, and uses the cached data as history to better compress recurrent screen updates in potential data spikes. We empirically studied the effectiveness of our cache scheme on screen updates generated by one of the most bandwidth-efficient thin client systems, Microsoft Terminal Service. The experimental results show that, with a 2 MB cache, the scheme reduces the data spike count by 26.7%-42.2% and network traffic by 9.9%-21.2% on the tested data, and reduces noticeable long latencies by 25.8%-38.5% across different types of applications. The scheme adds only a small amount of computation time, and the cache size can be negotiated between the client and server.
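
To make the idea of the hybrid cache-compression scheme concrete, the sketch below shows a simplified, hypothetical server-side step, not the paper's actual implementation. It assumes screen updates arrive as raw byte blocks; the names (SpikeCache, encode_update, CACHE_BYTES) and the choice of SHA-1 hashing and zlib with a preset dictionary are illustrative assumptions.

```python
# Illustrative sketch only: caches spike payloads and reuses them as
# compression history, in the spirit of the scheme described above.
import hashlib
import zlib
from collections import OrderedDict

CACHE_BYTES = 2 * 1024 * 1024  # assumed 2 MB cache, matching the experiments


class SpikeCache:
    """LRU cache of recent spike payloads, kept identical on server and client."""

    def __init__(self, capacity=CACHE_BYTES):
        self.capacity = capacity
        self.entries = OrderedDict()          # digest -> raw screen-update bytes
        self.size = 0

    def lookup(self, digest):
        payload = self.entries.get(digest)
        if payload is not None:
            self.entries.move_to_end(digest)  # refresh LRU order on a hit
        return payload

    def most_recent(self):
        """Most recently used payload, reused as compression history."""
        return next(reversed(self.entries.values())) if self.entries else None

    def insert(self, digest, payload):
        self.entries[digest] = payload
        self.size += len(payload)
        while self.size > self.capacity:      # evict least recently used entries
            _, evicted = self.entries.popitem(last=False)
            self.size -= len(evicted)


def encode_update(cache: SpikeCache, update: bytes):
    """Server side: decide what to send for one spike screen update."""
    digest = hashlib.sha1(update).digest()
    if cache.lookup(digest) is not None:
        # Recurrent update: the client's cache already holds the payload,
        # so a short reference replaces the full screen-update data.
        return ("ref", digest)
    history = cache.most_recent()
    if history is not None:
        # Seed the compressor with recently cached data so repeated fragments
        # of the new update match the shared history (zlib uses <= 32 KB of it).
        comp = zlib.compressobj(zdict=history[-32768:])
    else:
        comp = zlib.compressobj()
    body = comp.compress(update) + comp.flush()
    cache.insert(digest, update)              # the client inserts on receipt as well
    return ("data", body)
```

In this sketch, a client holding an identical cache would resolve "ref" messages locally and decompress "data" messages with a decompressor seeded from the same history (zlib.decompressobj(zdict=...)), keeping the two caches synchronized; the actual cache structure, hashing, and compressor used in the paper may differ.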