Cooperation Enforcement and Learning for Optimizing Packet Forwarding in Autonomous Wireless Networks

  • Authors:
  • C. Pandana; Zhu Han; K. J. Ray Liu

  • Affiliations:
  • Arraycomm, San Jose, CA; -; -

  • Venue:
  • IEEE Transactions on Wireless Communications
  • Year:
  • 2008

Quantified Score

Hi-index 0.01

Abstract

In wireless ad hoc networks, autonomous nodes are reluctant to forward others' packets because of their limited energy. However, such selfishness and noncooperation degrade both system efficiency and the performance of individual nodes. Moreover, distributed nodes with only local information may not know the cooperation point even if they are willing to cooperate. Hence, it is crucial to design a distributed mechanism for enforcing and learning cooperation among greedy nodes in packet forwarding. In this paper, we propose a self-learning repeated-game framework to overcome this problem and achieve the design goal. We employ self-transmission efficiency as the utility function of each autonomous node, defined as the ratio of the power spent on a node's own packet transmissions to the total power spent on its own transmissions and on packet forwarding. The proposed framework searches for good cooperation points and maintains cooperation among selfish nodes in two steps: First, an adaptive repeated-game scheme ensures cooperation among nodes at the current cooperative packet-forwarding probabilities. Second, self-learning algorithms find better cooperation probabilities that are feasible and benefit all nodes. We propose three learning schemes for different information structures, namely, learning with perfect observability, learning through flooding, and learning through utility prediction. Starting from noncooperation, the two steps are applied iteratively, so that better cooperation points are achieved and maintained in each iteration. Simulations show that the proposed framework enforces cooperation among distributed selfish nodes and that the proposed learning schemes achieve 70% to 98% of the performance of the optimal solution.
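
As a reading aid, the self-transmission efficiency described in the abstract can be written out explicitly; the symbols below are illustrative and not the authors' own notation. Let P_i^s be the power node i spends transmitting its own packets and P_i^f the power it spends forwarding others' packets. The utility of node i is then

    U_i = P_i^s / (P_i^s + P_i^f)

A purely selfish node drives P_i^f toward zero and U_i toward 1, which is precisely why a repeated-game mechanism is needed to make forwarding (P_i^f > 0) sustainable for every node.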