Maximizing the throughput of tandem lines with flexible failure-prone servers and finite buffers

  • Authors:
  • Sigrún Andradóttir; Hayriye Ayhan; Douglas G. Down

  • Affiliations:
  • H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30332-0205 (e-mail: sa@isye.gatech.edu; hayhan@isye.gatech.edu); Department of Computing and Software, McMaster University, Hamilton, Ontario L8S 4L7, Canada (e-mail: downd@mcmaster.edu)

  • Venue:
  • Probability in the Engineering and Informational Sciences
  • Year:
  • 2008

Abstract

Consider a tandem queuing network with an infinite supply of jobs in front of the first station, infinite room for completed jobs after the last station, finite buffers between stations, and a number of flexible servers who are subject to failures. We study the dynamic assignment of servers to stations with the goal of maximizing the long-run average throughput. Our main conclusion is that the presence of server failures does not have a major impact on the optimal assignment of servers to stations for the systems we consider. More specifically, we show that when the servers are generalists, any nonidling policy is optimal, irrespective of the reliability of the servers. We also provide theoretical and numerical results for Markovian systems with two stations and two or three servers that suggest that the structure of the optimal server assignment policy does not depend on the reliability of the servers and that ignoring server failures when assigning servers to stations yields near-optimal throughput. Finally, we present numerical results that illustrate that simple server assignment heuristics designed for larger systems with reliable servers also yield good throughput performance in Markovian systems with three stations and three failure-prone servers.
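The Markovian model described above can be sketched as a small event-driven simulation. The sketch below is an illustrative assumption, not the paper's exact model: it considers two stations, two generalist failure-prone servers, one concrete nonidling assignment rule, and additive service rates when servers collaborate at a station; all rates and parameter names are invented for the example.

```python
import random

def throughput(mu=1.0, fail=0.1, repair=0.5, buf=2, servers=2,
               horizon=5e4, seed=1):
    """Gillespie-style simulation of a two-station Markovian tandem line
    with an infinite supply of jobs at station 1, a finite buffer of size
    `buf` between the stations, and `servers` generalist servers that fail
    and are repaired at exponential rates.  Assumes additive rates when
    servers collaborate; returns the estimated long-run throughput."""
    rng = random.Random(seed)
    b, up = 0, servers        # buffer occupancy, operational servers
    t, done = 0.0, 0
    while t < horizon:
        # One concrete nonidling rule: clear a full buffer, feed an
        # empty one, otherwise keep one server draining station 2.
        if b == 0:
            s1, s2 = up, 0
        elif b == buf:
            s1, s2 = 0, up
        else:
            s2 = min(up, 1)
            s1 = up - s2
        r1 = mu * s1                   # station-1 completion (fills buffer)
        r2 = mu * s2                   # station-2 completion (a departure)
        rf = fail * up                 # failure of some operational server
        rr = repair * (servers - up)   # repair of some failed server
        total = r1 + r2 + rf + rr      # > 0: repairs pending or work available
        t += rng.expovariate(total)    # time to the next event
        u = rng.random() * total       # pick which event fired
        if u < r1:
            b += 1
        elif u < r1 + r2:
            b -= 1
            done += 1
        elif u < r1 + r2 + rf:
            up -= 1
        else:
            up += 1
    return done / t
```

Per the abstract's result for generalist servers, swapping in a different nonidling split of the `up` servers between the two stations should leave the estimated throughput essentially unchanged, whereas any policy that idles an operational server when work is available would sacrifice throughput.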