In near video-on-demand (near-VoD), requests for a video title are grouped together (i.e., batched) and served with a single multicast stream, thereby increasing the number of concurrent users that the system can support. Since users may not tolerate the delay incurred by batching and may therefore cancel their requests, a batching policy should be designed to achieve low user loss and high revenue (given by the total pay-per-view (PPV) charges collected over a long period of time across all movies). We propose an adaptive batching policy that offers users low delay at low arrival rates and gates the allocation of channels at high rates. This adaptivity is achieved by a simple 'token-tray' (TT) scheme, which governs when a stream may be allocated to a movie. In assigning a movie to a stream, we propose a weight function that depends on user queuing time and the movie's PPV charge (hence the term 'weighted queuing-time' (WQT)). Comparing our batching policy (TT/WQT) with a number of traditional ones (first-come-first-served (FCFS), forced-wait (FW), and batch-size-based (BSB) schemes), we show that our scheme achieves the highest revenue and lowest loss rate even when the arrival rate changes, with the user loss rate across movies being fairly uniform and the user delay remaining fairly low even at high arrival rates.
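The two mechanisms named in the abstract can be sketched as follows. This is an illustrative reconstruction only: the abstract does not specify the token-replenishment rule or the exact weight function, so the linear token refill and the weight "PPV charge times total queued waiting time" used below are assumptions, not the authors' definitions.

```python
class TokenTray:
    """Gates channel (stream) allocation: a stream may be started for
    some movie only when a token is available in the tray.  Tokens are
    assumed to be replenished periodically, one at a time, up to a cap."""

    def __init__(self, capacity):
        self.capacity = capacity   # maximum tokens the tray can hold
        self.tokens = capacity     # start full: low arrival rate => low delay

    def try_take(self):
        """Consume a token if one is available; return success."""
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False               # no token: allocation is gated

    def refill(self):
        """Add one token (called on each replenishment tick)."""
        self.tokens = min(self.capacity, self.tokens + 1)


def wqt_weight(queue, ppv, now):
    """Assumed WQT weight: PPV charge times the total waiting time of
    the users queued for the movie, so long-waiting batches of
    high-revenue movies are favoured."""
    return ppv * sum(now - arrival for arrival in queue)


def pick_movie(queues, prices, now):
    """Return the movie id with the largest WQT weight, or None if all
    queues are empty.  `queues` maps movie id -> list of arrival times;
    `prices` maps movie id -> PPV charge."""
    candidates = [(wqt_weight(q, prices[m], now), m)
                  for m, q in queues.items() if q]
    return max(candidates)[1] if candidates else None
```

When a replenishment tick fires, the server would call `tray.try_take()`; only on success does it start a multicast stream for `pick_movie(...)` and clear that movie's queue, which is how batching delay is traded against channel usage as the arrival rate grows.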