Symbolic analysis for parallelizing compilers

  • Authors:
  • Mohammad R. Haghighat; Constantine D. Polychronopoulos

  • Venue:
  • ACM Transactions on Programming Languages and Systems (TOPLAS)
  • Year:
  • 1996

Abstract

The notion of dependence captures the most important properties of a program for efficient execution on parallel computers. The dependence structure of a program defines the necessary constraints on the order of execution of the program components and provides sufficient information for the exploitation of the available parallelism. Static discovery and management of the dependence structure of programs save a tremendous amount of execution time, and dynamic utilization of dependence information results in a significant performance gain on parallel computers. However, experiments with parallel computers indicate that existing multiprocessing environments are unable to deliver the desired performance over a wide range of real applications, mainly due to lack of precision of their dependence information. This calls for an effective compilation scheme capable of understanding the dependence structure of complicated application programs. This article describes a methodology for capturing and analyzing program properties that are essential in the effective detection and efficient exploitation of parallelism on parallel computers. Based on this methodology, a symbolic analysis framework is developed for the Parafrase-2 parallelizing compiler. This framework extends the scope of a variety of important program analysis problems and solves them in a unified way. The attained solution space of these problems is much larger than that handled by existing compiler technology. Such a powerful approach is required for the effective compilation of a large class of application programs.
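As a minimal illustrative sketch (not taken from the article), the two C loops below show the kind of dependence question a parallelizing compiler faces: the first loop carries a cross-iteration dependence and cannot be run in parallel as written, while the second can be parallelized only if the compiler can reason symbolically about the offset k to prove the read and write ranges never overlap. The variable names and constants are hypothetical, chosen only for the demonstration.

    #include <stdio.h>

    #define N 8

    int main(void) {
        int a[2 * N];
        int k = N;  /* in general a symbolic, run-time value; fixed here for the demo */

        for (int i = 0; i < 2 * N; i++)
            a[i] = i;

        /* Loop 1: each iteration reads the value written by the previous one
         * (a[i] depends on a[i-1]), so the iterations carry a dependence and
         * cannot simply be executed in parallel. */
        for (int i = 1; i < N; i++)
            a[i] = a[i - 1] + 1;

        /* Loop 2: writes a[0..N-1] and reads a[k..k+N-1]. If symbolic analysis
         * can prove k >= N, the two index ranges are disjoint, the loop carries
         * no dependence, and its iterations may run in parallel. */
        for (int i = 0; i < N; i++)
            a[i] = a[i + k] * 2;

        printf("%d %d\n", a[0], a[N - 1]);
        return 0;
    }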