Receiving message prediction method

  • Authors:
  • Yoshiyuki Iwamoto;Koichi Suga;Kanemitsu Ootsu;Takashi Yokota;Takanobu Baba

  • Affiliations:
  • Nasu-Seiho High School;Hitachi Business Solution Co., Ltd.;Department of Information Science, Faculty of Engineering, Utsunomiya University, 7-1-2 Yoto, Utsunomiya, Tochigi 321-8585, Japan;Department of Information Science, Faculty of Engineering, Utsunomiya University, 7-1-2 Yoto, Utsunomiya, Tochigi 321-8585, Japan;Department of Information Science, Faculty of Engineering, Utsunomiya University, 7-1-2 Yoto, Utsunomiya, Tochigi 321-8585, Japan

  • Venue:
  • Parallel Computing - Special issue: Parallel and distributed scientific and engineering computing
  • Year:
  • 2003

Abstract

This paper proposes and evaluates the receiving message prediction method for high-performance message passing. In this method, a node in the idle state predicts the next message reception and speculatively executes the message reception and user processes. The method is independent of the underlying computer architecture and message passing libraries. We propose algorithms for message prediction and evaluate them in terms of success ratio and speed-up. We use the NAS Parallel Benchmark programs as typical parallel applications running on two different types of parallel platforms: a workstation cluster and a shared-memory multiprocessor. The experimental results show that the method can be applied to various platforms. The method can also be implemented simply by changing the software inside the message passing library, without any support from the underlying system software or hardware; this means that applications using the library need not be changed. Applying the method to a message passing interface library achieves a speed-up of 6.8% for the NAS Parallel Benchmarks, and static and dynamic selection of prediction methods based on profiling results further improves performance.