In our previous work, we proposed a systematic cross-layer framework for dynamic multimedia systems, which allows each layer to make autonomous and foresighted decisions that maximize the system's long-term performance while meeting the application's real-time delay constraints. That solution solved the cross-layer optimization offline, by modeling the system as a layered Markov decision process, under the assumption that the multimedia system's probabilistic dynamics were known a priori. In practice, however, these dynamics are unknown a priori and must therefore be learned online. In this paper, we address this problem by allowing the multimedia system's layers to learn, through repeated interactions with each other, to autonomously optimize the system's long-term performance at run-time. This layered learning setting poses two key challenges: (i) each layer's learning performance is directly impacted not only by its own dynamics, but also by the learning processes of the other layers with which it interacts; and (ii) a learning model must be selected that appropriately balances time complexity (i.e., learning speed) against the multimedia system's limited memory and the multimedia application's real-time delay constraints. We propose two reinforcement learning algorithms for optimizing the system under different design constraints: the first solves the cross-layer optimization in a centralized manner, and the second solves it in a decentralized manner. We analyze both algorithms in terms of their required computation, memory, and interlayer communication overheads. Because these reinforcement learning algorithms converge too slowly, we then introduce a complementary accelerated learning algorithm that exploits partial a priori knowledge of the system's dynamics to dramatically improve the system's performance.
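The reinforcement learning algorithms described above build on standard Q-learning over the system's states and actions. As a rough, hypothetical sketch only (the function name, state/action encoding, and the step-size and discount parameters `alpha` and `gamma` are illustrative assumptions, not the paper's actual formulation), a single tabular Q-learning update looks like:

```python
def q_learning_step(Q, state, action, reward, next_state, actions,
                    alpha=0.1, gamma=0.9):
    """One tabular Q-learning update:
    Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a)).

    Q is a dict mapping (state, action) pairs to value estimates;
    unseen pairs default to 0.0.
    """
    # Greedy estimate of the long-term value of the successor state.
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    # Move the estimate toward the one-step bootstrapped target.
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return Q[(state, action)]
```

In the centralized variant one such table would span the joint state-action space of all layers, while the decentralized variant would give each layer its own smaller table plus the interlayer message exchange analyzed in the paper; the accelerated variant speeds convergence by using partial knowledge of the transition dynamics to update many entries per interaction rather than only the one just visited.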
In our experiments, we demonstrate that decentralized learning performs as well as centralized learning, while enabling the layers to act autonomously. Additionally, we show that existing application-independent reinforcement learning algorithms, as well as existing myopic learning algorithms deployed in multimedia systems, perform significantly worse than our proposed application-aware and foresighted learning methods.