The scarcity and large fluctuations of link bandwidth in wireless networks have motivated the development of adaptive multimedia services in mobile communication networks, where the bandwidth of individual ongoing flows can be increased or decreased on the fly. This paper studies quality of service (QoS) provisioning in such systems. In particular, call admission control and bandwidth adaptation are formulated as a constrained Markov decision problem. The rapid growth in the number of states and the difficulty of estimating state transition probabilities in practical systems make it very hard to find the optimal policy with classical methods. We present a novel approach that uses a form of discounted-reward reinforcement learning known as Q-learning to solve the QoS provisioning problem for wireless adaptive multimedia. Q-learning does not require an explicit state transition model to solve the Markov decision problem, so more general and realistic assumptions can be applied to the underlying system model than previous schemes allow. Moreover, the proposed scheme efficiently handles the large state space and action set of the wireless adaptive multimedia QoS provisioning problem. Handoff dropping probability and average allocated bandwidth are treated as QoS constraints in our model and can be guaranteed simultaneously. Simulation results demonstrate the effectiveness of the proposed scheme in adaptive multimedia mobile communication networks.
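To illustrate why no transition model is needed, the sketch below applies the tabular Q-learning update to a deliberately tiny admission-control toy: the state is the number of ongoing calls in a cell of capacity `CAPACITY`, and the action is to accept or reject an arriving call. All names, rewards, and transition dynamics here are hypothetical stand-ins; the paper's actual formulation has a far richer state/action space (bandwidth levels, call classes, handoff constraints) and enforces the QoS constraints explicitly.

```python
import random

CAPACITY = 5     # toy cell capacity (hypothetical)
GAMMA = 0.9      # discount factor for the discounted-reward criterion
ALPHA = 0.1      # learning rate
EPSILON = 0.1    # epsilon-greedy exploration rate

def step(state, action, rng):
    """One sampled transition: act on an arriving call, then maybe a departure.

    The agent never sees these dynamics directly -- it only observes
    the resulting (next_state, reward) sample, which is all Q-learning needs.
    """
    reward = 0.0
    if action == 1 and state < CAPACITY:
        state += 1
        reward = 1.0          # revenue for carrying the call
    elif action == 1:
        reward = -5.0         # penalty for admitting beyond capacity
    if state > 0 and rng.random() < 0.3:
        state -= 1            # an ongoing call completes
    return state, reward

def train(episodes=2000, steps=50, seed=0):
    rng = random.Random(seed)
    # Q-table: one (reject, accept) value pair per occupancy level.
    Q = [[0.0, 0.0] for _ in range(CAPACITY + 1)]
    for _ in range(episodes):
        s = 0
        for _ in range(steps):
            if rng.random() < EPSILON:
                a = rng.randrange(2)                      # explore
            else:
                a = max((0, 1), key=lambda x: Q[s][x])    # exploit
            s2, r = step(s, a, rng)
            # Q-learning update: driven purely by the sampled
            # (s, a, r, s') tuple, with no transition probabilities.
            Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

After training on this toy, the learned policy accepts calls while capacity remains and rejects at full occupancy, mirroring how the model-free update converges toward a sensible admission rule from samples alone.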