Fundamentals of queueing theory (2nd ed.).
I/O issues in a multimedia system
Computer
Staggered striping in multimedia information systems
SIGMOD '94 Proceedings of the 1994 ACM SIGMOD international conference on Management of data
Overlay striping and optimal parallel I/O for modern applications
Parallel Computing - Special issues on applications: parallel data servers and applications
A case for intelligent disks (IDISKs)
ACM SIGMOD Record
Active Storage for Large-Scale Data Mining and Multimedia
VLDB '98 Proceedings of the 24th International Conference on Very Large Data Bases
Disk striping in video server environments
ICMCS '96 Proceedings of the 1996 International Conference on Multimedia Computing and Systems
Buffer Management for Continuous Media Sharing in Multimedia Database Systems
Data sharing in interactive continuous media servers
Prefetching into Smart-Disk Caches for High Performance Media Servers
ICMCS '99 Proceedings of the IEEE International Conference on Multimedia Computing and Systems - Volume 2
Saving disk energy in video servers by combining caching and prefetching
ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP) - Special issue of best papers of ACM MMSys 2013 and ACM NOSSDAV 2013
Networked continuous-media applications are emerging at a great pace. Cache memories have long been recognized as a key resource (along with network bandwidth) whose intelligent exploitation can ensure high performance for such applications. Cache memories exist at the continuous-media servers and at their proxy servers in the network. Within a server, cache memories exist in a hierarchy: at the host, at the storage devices, and at intermediate multi-device controllers. Our research is concerned with how best to exploit these resources in the context of continuous-media servers and, in particular, how best to exploit the available cache memories at the drive, disk-array-controller, and host levels. Our results determine under which circumstances and system configurations it is preferable to devote the available memory to traditional caching (a.k.a. “data sharing”) techniques as opposed to prefetching techniques. In addition, we show how to configure the available memory for optimal performance and optimal cost. Our results show that prefetching techniques are preferable for small caches (such as those expected at the drive level). For very large caches (such as those employed at the host level), caching techniques are preferable. For intermediate cache sizes (such as those at multi-device controllers), a combination of both strategies should be employed.
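The size-dependent recommendation above can be sketched as a simple policy selector. This is an illustrative sketch only: the function name and the byte thresholds standing in for the drive-, controller-, and host-level cache tiers are assumptions, not values from the paper.

```python
# Hypothetical sketch of the abstract's qualitative result:
# small caches (drive level) favor prefetching, very large caches
# (host level) favor caching/data sharing, and intermediate caches
# (multi-device controllers) should combine both.
# The threshold values below are illustrative assumptions.

def choose_strategy(cache_bytes: int,
                    small_limit: int = 8 * 2**20,      # assumed drive-level size
                    large_limit: int = 512 * 2**20):   # assumed host-level size
    """Return the preferred use of a cache of the given size."""
    if cache_bytes <= small_limit:
        return "prefetching"              # small cache: read-ahead wins
    if cache_bytes >= large_limit:
        return "caching"                  # large cache: data sharing wins
    return "caching+prefetching"          # controller level: combine both


# Example: a 2 MiB drive cache falls in the prefetching regime.
print(choose_strategy(2 * 2**20))
```

The thresholds would in practice come from the performance and cost analysis the abstract describes, not from fixed constants.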