Online computation and competitive analysis
Nearly optimal FIFO buffer management for DiffServ. Proceedings of the Twenty-First Annual ACM Symposium on Principles of Distributed Computing (PODC).
Loss-bounded analysis for differentiated services. Journal of Algorithms.
Buffer overflow management in QoS switches. SIAM Journal on Computing.
Optimal smoothing schedules for real-time streams. Distributed Computing.
Better online buffer management. Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA '07).
Improved online algorithms for buffer management in QoS switches. ACM Transactions on Algorithms (TALG).
We consider scheduling packets with values in a capacity-bounded buffer in an online setting. In this model, there is a buffer with a limited capacity B; at any time, the buffer can hold at most B packets. Packets arrive over time, and each packet has a non-negative value. Packets leave the buffer only when they are sent or dropped, and a packet that has left the buffer is not reconsidered for delivery. In each time step, at most one packet in the buffer can be sent, and packets must be sent in the order of their arrival. The objective is to maximize the total value of the packets sent, in an online manner. In this paper, we study a variant of this FIFO buffering model in which a packet's value is either 1 or α, for some α > 1. We present a deterministic memoryless 1.304-competitive algorithm. This algorithm matches the competitive ratio of the one presented by Lotker and Patt-Shamir [Z. Lotker, B. Patt-Shamir, Nearly optimal FIFO buffer management for DiffServ, in: Proceedings of the 21st Annual ACM Symposium on Principles of Distributed Computing, PODC, 2002, pp. 134-142; Z. Lotker, B. Patt-Shamir, Nearly optimal FIFO buffer management for DiffServ, Computer Networks 17 (1) (2003) 77-89], but it is simpler and does not employ any marking bits. The idea behind our algorithm is novel and differs from all previous approaches applied to the general model and its variants: instead of proactively preempting one packet when a new packet arrives, we may preempt more than one 1-value packet at once, at the time when the buffer contains sufficiently many α-value packets.
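The lazy bulk-preemption idea described above can be illustrated with a small simulation. The following is a minimal sketch, not the paper's actual algorithm: the concrete trigger used here (preempt all 1-value packets once α-value packets fill at least a β fraction of the buffer capacity, for a hypothetical parameter β) is an assumption chosen only to show the "preempt several 1-value packets at once" mechanic in the two-valued FIFO model.

```python
from collections import deque

def simulate_lazy_fifo(events, B, alpha, beta=0.5):
    """Simulate a two-valued FIFO buffer (packet values 1 and alpha > 1).

    `events[t]` lists the packet values arriving at time step t.
    The bulk-preemption threshold `beta` is a hypothetical stand-in
    for the paper's actual rule.  Returns the total value sent.
    """
    buf = deque()  # FIFO buffer of packet values, capacity B
    sent = 0
    for arrivals in events:
        for v in arrivals:
            if len(buf) < B:
                buf.append(v)            # room available: accept the packet
            elif v == alpha and 1 in buf:
                buf.remove(1)            # full: preempt earliest 1-value packet
                buf.append(v)
            # otherwise the arriving packet is dropped
        if buf.count(alpha) >= beta * B:
            # lazy bulk preemption: once alpha-packets are plentiful,
            # drop every remaining 1-value packet in one shot
            buf = deque(v for v in buf if v == alpha)
        if buf:
            sent += buf.popleft()        # send head-of-line packet (FIFO order)
    return sent
```

For example, with B = 4, α = 5, and the default β = 0.5, four 1-value packets followed by two 5-value packets cause both 1-value packets still in the buffer to be preempted together once the two 5-value packets are admitted.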