In Vehicular Ad Hoc Networks (VANETs), frequent changes of network topology caused by vehicle movement prevent general-purpose ad hoc routing protocols such as AODV and DSR from working efficiently. This paper proposes QLAODV, a VANET routing protocol suited to unicast applications in high-mobility scenarios. QLAODV is a distributed reinforcement-learning routing protocol that uses the Q-Learning algorithm to infer network state information and uses unicast control packets to check the availability of paths in real time, allowing Q-Learning to work efficiently in a highly dynamic network environment. We analyze the performance of QLAODV through NS2 simulations under different mobility models, and the results confirm that QLAODV significantly outperforms the original AODV in highly dynamic networks.
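The core idea of a Q-Learning-based routing protocol like the one described above can be sketched as a per-node table of Q-values over (destination, next hop) pairs, updated when feedback (e.g. a unicast control packet) reports on a path's quality. The update rule below is the standard Q-learning rule; the class name, reward signal, and parameter values are illustrative assumptions, not details taken from the QLAODV paper itself:

```python
class QRoutingTable:
    """Sketch of per-node Q-learning state for route selection.

    q[dest][neighbor] estimates the quality of forwarding packets
    toward `dest` via `neighbor`. All names and parameters here are
    assumptions for illustration, not the paper's exact design.
    """

    def __init__(self, alpha=0.5, gamma=0.9):
        self.alpha = alpha   # learning rate
        self.gamma = gamma   # discount factor
        self.q = {}          # q[dest][neighbor] -> estimated path quality

    def update(self, dest, neighbor, reward, neighbor_best):
        """Standard Q-learning update after feedback reports `reward`
        for forwarding via `neighbor`, whose own best Q-value toward
        `dest` is `neighbor_best` (learned from control packets)."""
        table = self.q.setdefault(dest, {})
        old = table.get(neighbor, 0.0)
        table[neighbor] = old + self.alpha * (
            reward + self.gamma * neighbor_best - old)

    def best_next_hop(self, dest, neighbors):
        """Greedily pick the neighbor with the highest Q-value."""
        table = self.q.get(dest, {})
        return max(neighbors, key=lambda n: table.get(n, 0.0))
```

Because each node updates its table only from local feedback and neighbor reports, the scheme stays distributed, which is what lets it track the frequent topology changes of a vehicular network.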