Worst-case execution time analysis on modern processors

  • Authors:
  • Kelvin D. Nilsen; Bernt Rygg

  • Affiliations:
  • Department of Computer Science, Iowa State University, 226 Atanasoff Hall, Ames, IA (both authors)

  • Venue:
  • LCTES '95: Proceedings of the ACM SIGPLAN 1995 Workshop on Languages, Compilers, & Tools for Real-Time Systems
  • Year:
  • 1995

Abstract

Many of the trends that have dominated recent evolution and advancement within the computer architecture community have complicated the analysis of task execution times. Most of the difficulties result from two particular emphases: (1) instruction-level parallelism, and (2) optimization of average-case behavior rather than worst-case latencies. Both of these trends have increased the nondeterminism in the time required to execute particular code sequences. Because the analysis required to determine worst-case task execution times on modern processors is so complicated, it is not practical for programmers to perform all of the necessary analyses by hand; tools must assist them. This paper describes ongoing research on a collection of tools intended to automate the analyses that must be performed in order to build reliable real-time software for modern computing environments. Emphasis is given to the interplay between components of the development environment.
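
The abstract does not describe how such worst-case bounds are actually derived, so the following minimal sketch illustrates the kind of bookkeeping a WCET tool automates: a structural (tree-based) bound computed over a program's control structure, taking the slower side of each branch and multiplying loop bodies by an iteration bound. The node layout, cycle counts, and loop bound here are hypothetical placeholders, not the authors' toolset or figures.

    /*
     * Minimal sketch of structural (tree-based) WCET estimation.
     * All cycle counts and loop bounds are hypothetical, chosen only to
     * illustrate the bookkeeping such a tool automates; they do not
     * model any particular processor or the authors' tools.
     */
    #include <stdio.h>

    typedef enum { SEQ, BRANCH, LOOP, BLOCK } NodeKind;

    typedef struct Node {
        NodeKind kind;
        unsigned long cycles;      /* BLOCK: worst-case cycles, including a
                                      pessimistic cache-miss/pipeline penalty */
        unsigned long bound;       /* LOOP: maximum iteration count */
        const struct Node *a, *b;  /* SEQ/BRANCH use both children, LOOP uses a */
    } Node;

    /* Worst-case execution time of a program fragment, in cycles. */
    static unsigned long wcet(const Node *n)
    {
        switch (n->kind) {
        case BLOCK:  return n->cycles;
        case SEQ:    return wcet(n->a) + wcet(n->b);
        case BRANCH: {                       /* assume the slower path is taken */
            unsigned long t = wcet(n->a), e = wcet(n->b);
            return t > e ? t : e;
        }
        case LOOP:   return n->bound * wcet(n->a);
        }
        return 0;
    }

    int main(void)
    {
        /* init; then: loop at most 100 times over if (...) fast else slow */
        Node fast   = { BLOCK, 12, 0, NULL, NULL };
        Node slow   = { BLOCK, 47, 0, NULL, NULL };   /* e.g. cache-miss heavy */
        Node branch = { BRANCH, 0, 0, &fast, &slow };
        Node loop   = { LOOP,   0, 100, &branch, NULL };
        Node init   = { BLOCK, 30, 0, NULL, NULL };
        Node prog   = { SEQ,    0, 0, &init, &loop };

        printf("WCET bound: %lu cycles\n", wcet(&prog));  /* 30 + 100*47 = 4730 */
        return 0;
    }

The nondeterminism the abstract emphasizes shows up precisely in the per-block cycle numbers: on a processor with deep pipelines and caches tuned for average-case behavior, a safe per-block bound depends on pipeline and cache state at block entry, which is what makes hand analysis impractical.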