A mobile ad hoc network (MANET) is a self-configuring network of mobile devices connected by wireless links. Frequent topology changes and limited bandwidth make communication in MANETs particularly challenging. We present a MANET routing protocol that handles network mobility efficiently by preemptively switching to a better route before the current route fails. The protocol uses a distributed Q-learning algorithm to infer network status information, and it takes link stability and bandwidth efficiency into consideration when selecting a route. We study the performance of this protocol through simulation and demonstrate its advantages over existing protocols.
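To make the distributed Q-learning idea concrete, the sketch below shows a Q-routing-style update in which each node keeps a table of estimated delivery costs per (destination, neighbor) pair and learns from its neighbors' own estimates. This is a hypothetical illustration under assumed parameters (learning rate `alpha`, pessimistic initial cost `init`), not the paper's actual protocol, which additionally weighs link stability and bandwidth.

```python
class QRouter:
    """One node's Q-table for route selection.

    q[(dest, neighbor)] estimates the cost (e.g. delay) of delivering a
    packet to `dest` via `neighbor`. Names and parameters are illustrative
    assumptions, not taken from the protocol described above.
    """

    def __init__(self, node_id, neighbors, alpha=0.5, init=10.0):
        self.node_id = node_id
        self.neighbors = list(neighbors)
        self.alpha = alpha   # learning rate
        self.init = init     # pessimistic initial cost for unseen routes
        self.q = {}          # (dest, neighbor) -> estimated cost

    def best_next_hop(self, dest):
        # Forward via the neighbor with the lowest estimated cost.
        return min(self.neighbors, key=lambda n: self.q.get((dest, n), self.init))

    def best_estimate(self, dest):
        # This node's own cost estimate to dest, advertised to neighbors.
        return min(self.q.get((dest, n), self.init) for n in self.neighbors)

    def update(self, dest, neighbor, link_cost, neighbor_estimate):
        # Temporal-difference update: move the estimate toward
        # (cost of this hop + neighbor's remaining-cost estimate).
        old = self.q.get((dest, neighbor), self.init)
        target = link_cost + neighbor_estimate
        self.q[(dest, neighbor)] = old + self.alpha * (target - old)


# Toy topology A -- B -- C: A learns that reaching C via B costs ~2 hops.
a = QRouter('A', ['B'])
b = QRouter('B', ['A', 'C'])
for _ in range(50):
    b.update('C', 'C', 1.0, 0.0)   # B's direct link to the destination C
    a.update('C', 'B', 1.0, b.best_estimate('C'))
```

Because estimates are refreshed continuously from neighbor feedback, a node can notice a route's cost rising (e.g. a weakening link) and switch next hops before the route actually breaks, which is the preemptive behavior the abstract describes.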