An Efficient Reinforcement Learning Game Framework for UAV-Enabled Wireless Sensor Network Data Collection

Published in JCST, CCF T1, 2022

Recommended citation: Tong Ding, Ning Liu*, Zhongmin Yan, Lei Liu*, Li-zhen Cui. An Efficient Reinforcement Learning Game Framework for UAV-Enabled Wireless Sensor Network Data Collection. J. Comput. Sci. Technol. 37, 1356–1368 (2022). https://doi.org/10.1007/s11390-022-2419-8. https://link.springer.com/article/10.1007/s11390-022-2419-8

With the growing demand for massive-data services, applications that rely on big geographic data play crucial roles in academic and industrial communities. Unmanned aerial vehicles (UAVs), combined with terrestrial wireless sensor networks (WSNs), can provide sustainable solutions for data harvesting. The literature has posed rising demands for efficient data collection over large open areas, which calls for UAV trajectory planning methods with lower energy consumption. Many solutions to UAV planning for large open areas have been proposed, and one of the most practical techniques in previous studies is deep reinforcement learning (DRL). However, the overestimation problem in limited-experience DRL can quickly trap the UAV path-planning process in local optima. Moreover, using the central nodes of the sub-WSNs as the sink nodes or navigation points for UAVs to visit may incur extra collection costs. This paper develops a data-driven DRL-based game framework with two partners to address these demands. A cluster head processor (CHP) is employed to determine the sink nodes, and a navigation order processor (NOP) is established to plan the path. The CHP and NOP exchange information with each other and provide optimized solutions upon reaching a Nash equilibrium. Numerical results show that the proposed game framework offers UAVs low-cost data-collection trajectories, saving at least 17.58% of energy consumption compared with the baseline methods.
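The two-player structure described above can be illustrated with a toy best-response loop. This is only a hedged sketch of the general idea, not the paper's DRL method: here the CHP's "best response" greedily picks each cluster's head to reduce intra-cluster and tour cost, the NOP's is a nearest-neighbor visiting order, and the loop stops when neither player changes its choice (a Nash-style fixed point). All function names, the depot at the origin, and the cost terms are illustrative assumptions.

```python
import math


def dist(a, b):
    """Euclidean distance between two 2-D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])


def chp_best_response(clusters, order, heads):
    """CHP (illustrative): for each cluster, pick the head minimizing
    intra-cluster communication cost plus distance to the previous
    stop on the current UAV tour (depot at the origin for the first stop)."""
    new_heads = list(heads)
    for i, cluster in enumerate(clusters):
        pos = order.index(i)
        prev = (0.0, 0.0) if pos == 0 else heads[order[pos - 1]]
        new_heads[i] = min(
            cluster,
            key=lambda s: sum(dist(s, t) for t in cluster) + dist(s, prev),
        )
    return new_heads


def nop_best_response(heads):
    """NOP (illustrative): greedy nearest-neighbor visiting order
    over the current cluster heads, starting from the depot."""
    remaining = list(range(len(heads)))
    pos, order = (0.0, 0.0), []
    while remaining:
        nxt = min(remaining, key=lambda i: dist(pos, heads[i]))
        order.append(nxt)
        pos = heads[nxt]
        remaining.remove(nxt)
    return order


def equilibrium(clusters, max_iters=20):
    """Alternate CHP and NOP best responses until neither changes:
    at that point no player benefits from deviating unilaterally."""
    heads = [c[0] for c in clusters]
    order = list(range(len(clusters)))
    for _ in range(max_iters):
        new_order = nop_best_response(heads)
        new_heads = chp_best_response(clusters, new_order, heads)
        if new_order == order and new_heads == heads:
            break  # fixed point reached
        order, heads = new_order, new_heads
    return heads, order
```

In the paper the two processors are learned with DRL rather than hard-coded greedy rules, but the alternating exchange of information until a Nash equilibrium follows the same shape as this loop.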

Download paper here