Architectural support for real-time task scheduling in SMT processors

  • Authors:
  • Francisco J. Cazorla;Peter M. W. Knijnenburg;Rizos Sakellariou;Enrique Fernández;Alex Ramirez;Mateo Valero

  • Affiliations:
  • Universitat Politècnica de Catalunya and Barcelona Supercomputing Center, Barcelona, Spain;Leiden University, The Netherlands;University of Manchester, United Kingdom;Universidad de Las Palmas de Gran Canaria, Spain;Universitat Politècnica de Catalunya and Barcelona Supercomputing Center, Barcelona, Spain;Universitat Politècnica de Catalunya and Barcelona Supercomputing Center, Barcelona, Spain

  • Venue:
  • Proceedings of the 2005 international conference on Compilers, architectures and synthesis for embedded systems
  • Year:
  • 2005

Abstract

In Simultaneous Multithreaded (SMT) architectures, most hardware resources are shared between threads. This provides a good cost/performance trade-off, which makes these architectures suitable for use in embedded systems. However, since threads share many resources, they also interfere with each other. As a result, the execution times of applications become highly unpredictable and dependent on the context in which an application is executed. Obviously, this poses problems if an SMT is to be used in a real-time system.

In this paper, we propose two novel hardware mechanisms that can be used to reduce this performance variability. In contrast to previous approaches, our proposed mechanisms do not need any information beyond that already known by traditional job schedulers, nor do they require extensive profiling of workloads to determine optimal schedules. Our mechanisms are based on dynamic resource partitioning. The OS-level job scheduler needs to be slightly adapted in order to provide the hardware resource allocator with information on how this resource partitioning needs to be done. We show that our mechanisms provide the stability SMT architectures need to be used in real-time systems: the real-time benchmarks we used meet their deadlines in more than 98% of the cases considered, while the other thread in the workload still achieves high throughput.
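The idea of dynamic resource partitioning guided by scheduler-supplied information can be illustrated with a minimal sketch. This is not the paper's actual mechanism; the pool size, the `rt_share` hint, and the adjustment step are illustrative assumptions standing in for the information the OS-level job scheduler would pass to the hardware resource allocator.

```python
# Hypothetical sketch of dynamic partitioning of one shared SMT resource
# (e.g., a pool of reorder-buffer entries) between a real-time thread and
# a background thread. All names and constants are illustrative.
import math

TOTAL_ENTRIES = 128  # assumed size of the shared resource pool


def partition(rt_share: float) -> tuple[int, int]:
    """Split the pool: the real-time thread is guaranteed roughly
    rt_share of the entries; the background thread gets the rest."""
    rt = max(1, round(rt_share * TOTAL_ENTRIES))
    return rt, TOTAL_ENTRIES - rt


def adjust(rt_share: float, progress: float, target: float) -> float:
    """Raise the real-time thread's share when it is behind its target
    progress toward the deadline, lower it when it is ahead, so the
    background thread can reclaim entries and sustain throughput."""
    if progress < target:
        return min(0.9, rt_share + 0.1)
    return max(0.1, rt_share - 0.1)


# Start from an even split, then react to observed progress.
share = 0.5
rt, other = partition(share)                      # (64, 64)
share = adjust(share, progress=0.4, target=0.5)   # behind schedule: share grows
```

The key point the sketch captures is that the allocator needs only a coarse hint from the scheduler, not profiled per-workload schedules: the partition is tightened or relaxed at run time based on how the real-time thread is progressing.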