We consider dynamic control policies for slotted Aloha random-access systems. We derive new performance bounds for random access combined with power control, and we establish the existence of optimal control policies for such systems. When the number of backlogged users is known, we obtain the optimal policies explicitly and analyze their throughput and delay using the theory of Markov Decision Processes (MDPs) with the average-cost criterion. For the realistic case in which the backlog is unknown, we establish the existence of backlog-minimizing policies over the same range of arrival rates as in the ideal known-backlog case, using the theory of MDPs with Borel state spaces and unbounded costs. We also propose suboptimal control policies whose performance is close to optimal without sacrificing stability; these policies substantially outperform existing "Certainty Equivalence" controllers.
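To illustrate the known-backlog setting, the sketch below solves a small average-cost MDP for a slotted Aloha backlog chain by relative value iteration. This is not the paper's model: the Bernoulli arrival rate `LAM`, the truncation level `N`, the retransmission-probability grid, and the per-slot cost equal to the backlog are all illustrative assumptions.

```python
LAM = 0.3      # assumed Bernoulli new-arrival probability per slot
N = 30         # assumed backlog truncation level for the state space
P_GRID = [i / 50 for i in range(1, 51)]  # candidate retransmission probabilities

def success_prob(n, p):
    # Probability that exactly one of n backlogged users transmits in a slot.
    return n * p * (1 - p) ** (n - 1) if n > 0 else 0.0

def transitions(n, p):
    # Yields (next_state, probability). In this simplified model the new
    # arrival joins the backlog after the slot's transmission attempt.
    s = success_prob(n, p)
    for arr, pa in ((1, LAM), (0, 1 - LAM)):
        for dep, pd in ((1, s), (0, 1 - s)):
            yield min(max(n - dep, 0) + arr, N), pa * pd

def relative_value_iteration(iters=500):
    # Average-cost Bellman recursion, normalized at reference state 0:
    # h(n) <- c(n) + min_p E[h(next)] - g, with per-slot cost c(n) = n.
    h = [0.0] * (N + 1)
    for _ in range(iters):
        q = []
        for n in range(N + 1):
            best = min(sum(pr * h[m] for m, pr in transitions(n, p))
                       for p in P_GRID)
            q.append(n + best)
        g = q[0]                      # average-cost estimate
        h = [v - g for v in q]        # keep h(0) = 0
    # Greedy policy with respect to the converged relative values.
    policy = [min(P_GRID,
                  key=lambda p: sum(pr * h[m] for m, pr in transitions(n, p)))
              for n in range(N + 1)]
    return g, policy

g, policy = relative_value_iteration()
```

Under these assumptions the computed policy recovers the familiar qualitative behavior: the retransmission probability decreases as the backlog grows, roughly like 1/n.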