Model oriented profiling of parallel programs

  • Authors:
  • J. A. González; C. León; J. L. Roda; C. Rodríguez; J. M. Rodríguez; F. Sande; M. Printista

  • Affiliations:
  • Dpto. Estadística, I.O. y Computación, Universidad de La Laguna, La Laguna, Spain (J. A. González, C. León, J. L. Roda, C. Rodríguez, J. M. Rodríguez, F. Sande); Universidad Nacional de San Luis, San Luis, Argentina (M. Printista)

  • Venue:
  • EUROMICRO-PDP'02 Proceedings of the 10th Euromicro conference on Parallel, distributed and network-based processing
  • Year:
  • 2002

Abstract

The prediction analysis model presented here extends BSP to cover both oblivious synchronization and group partitioning. These generalizations imply that different processors may finish the same superstep at different times. A further consideration is that, even if the numbers of individual communication or computation operations in two stages are the same, the actual times for these two stages may differ. These differences are due to the different nature of the operations or to the particular pattern followed by the messages. Even worse, the assumption that a constant number of machine instructions takes constant time is far from the truth: current memory hierarchies imply that memory access times vary from a few cycles to several thousand. A natural proposal is to associate a different proportionality constant with each basic block and, analogously, to associate different latencies and bandwidths with each "communication block". Unfortunately, using this approach implies that the evaluation parameters not only depend on the given architecture but also reflect characteristics of the algorithm. Such parameter evaluation must therefore be repeated for every algorithm. This is a heavy task, involving experiment design, timing, statistics, pattern recognition and multi-parameter fitting algorithms, so software support is required. We have developed a compiler that takes as source a C program annotated with complexity formulas and produces instrumented code as output. The trace files obtained from executing the resulting code are analyzed with an interactive interpreter, which yields, among other information, the values of those parameters.
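The per-block parameter fit described above can be illustrated with a minimal sketch. Assuming a single basic block whose complexity formula is f(n) = n, the proportionality constant A in t(n) ≈ A·f(n) can be estimated from timings at several problem sizes by least squares through the origin. The timing code and fitting step below are illustrative assumptions only; they are not the authors' annotation syntax, instrumentation compiler, or interactive interpreter.

```c
/* Minimal sketch (assumption: this is not the authors' tool; it only
 * illustrates fitting one per-block proportionality constant).
 * Idea: time one "basic block" for several sizes n, then fit A in
 * t(n) = A * f(n), with f(n) = n, by least squares through the origin. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* hypothetical basic block whose complexity formula is f(n) = n */
static double block(const double *v, int n) {
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += v[i] * v[i];
    return s;
}

int main(void) {
    double sum_ft = 0.0, sum_ff = 0.0;   /* least-squares accumulators */
    for (int n = 100000; n <= 1000000; n += 100000) {
        double *v = malloc((size_t)n * sizeof *v);
        if (!v) return 1;
        for (int i = 0; i < n; i++) v[i] = (double)i;

        clock_t t0 = clock();
        volatile double r = block(v, n);  /* volatile: keep the call alive */
        clock_t t1 = clock();
        (void)r;

        double t = (double)(t1 - t0) / CLOCKS_PER_SEC;  /* measured time   */
        double f = (double)n;                           /* complexity f(n) */
        sum_ft += f * t;
        sum_ff += f * f;
        free(v);
    }
    /* A = sum(f*t) / sum(f*f): estimated seconds per abstract operation */
    printf("estimated per-block constant A = %g s/op\n", sum_ft / sum_ff);
    return 0;
}
```

In the approach described in the abstract, such fits are not written by hand: the compiler inserts the timing probes around each annotated block, and the interpreter performs the (multi-parameter) fit over the collected trace files.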