Reinforcement learning approach for SF allocation in LoRa network

Hong, S., Yao, F., Zhang, F., Ding, Y. and Yang, S.-H. (2023) Reinforcement learning approach for SF allocation in LoRa network. IEEE Internet of Things Journal, 10 (20). pp. 18259-18272. ISSN 2327-4662

Full text not archived in this repository.

It is advisable to refer to the publisher's version if you intend to cite from this work.

To link to this item DOI: 10.1109/JIOT.2023.3279429

Abstract/Summary

LoRa technology is widely used to build wireless networks for various Internet of Things (IoT) applications. With the increasing popularity of IoT, LoRa has also gained tremendous attention in recent years. In most LoRa networks, the Aloha protocol is employed to send packets, which can easily lead to collisions. Thanks to the orthogonality of spreading factors (SFs) in the LoRa modulation technique, a potential solution to this collision issue is available in LoRa networks. In this study, a reinforcement learning (RL) based method called LR-RL is proposed to assign SFs properly and thereby alleviate collisions. The idea of LR-RL is mainly derived from a mathematical model of SF-channel traffic equilibrium, which indicates that SFs with higher data rates must carry more of the packet load. Based on this system model, several related methods, such as LR-opt-pro, LR-greedy, and LR-RL, are put forward in turn, with the LR-RL algorithm achieving the best performance in terms of packet collision rate (PCR). In addition, we carry out simulations to evaluate the performance of LR-RL in both one-hop and LoRa-Mesh networks. The results show that LR-RL achieves a lower PCR than other SF allocation methods. Moreover, practical experiments are also conducted to verify its performance. All the experiments show that LR-RL is a desirable method for reducing packet collisions in LoRa networks.
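The abstract's core intuition, that SFs with higher data rates (shorter airtime) should absorb more traffic, and that nodes can learn such an allocation from collision feedback, can be illustrated with a minimal sketch. The following Python example is a hypothetical epsilon-greedy bandit simulation, not the paper's LR-RL algorithm: the collision model, reward shaping, data-rate scaling, and all parameter values are assumptions made purely for illustration.

```python
import random
from collections import Counter

# Hypothetical sketch: each node in a single-gateway LoRa cell learns which
# spreading factor (SF) to use via epsilon-greedy action-value updates.
# Higher-data-rate SFs can carry more load, so collision feedback pushes
# the population toward a load split roughly proportional to data rate.
# This is NOT the paper's LR-RL method; it only illustrates the intuition.

SFS = [7, 8, 9, 10, 11, 12]
# Assumed relative data rate: each step up in SF roughly halves the rate.
DATA_RATE = {sf: 2 ** (12 - sf) for sf in SFS}

def collision_prob(load, rate):
    """Toy collision model: probability grows with load normalized by data rate."""
    return min(1.0, load / (rate * 10.0))

def simulate(n_nodes=200, rounds=500, eps=0.1, seed=0):
    rng = random.Random(seed)
    # Per-node action-value estimates and visit counts for each SF.
    q = [{sf: 0.0 for sf in SFS} for _ in range(n_nodes)]
    n = [{sf: 0 for sf in SFS} for _ in range(n_nodes)]

    for _ in range(rounds):
        # Each node picks an SF: explore with probability eps, else exploit.
        choices = []
        for i in range(n_nodes):
            if rng.random() < eps:
                sf = rng.choice(SFS)
            else:
                sf = max(SFS, key=lambda s: q[i][s])
            choices.append(sf)
        load = Counter(choices)
        # Reward 1 for a (probabilistically) collision-free packet, else 0.
        for i, sf in enumerate(choices):
            ok = rng.random() >= collision_prob(load[sf], DATA_RATE[sf])
            reward = 1.0 if ok else 0.0
            n[i][sf] += 1
            q[i][sf] += (reward - q[i][sf]) / n[i][sf]  # incremental mean update
    # Report how many nodes end up preferring each SF.
    return Counter(max(SFS, key=lambda s: q[i][s]) for i in range(n_nodes))

if __name__ == "__main__":
    print(simulate())  # nodes should concentrate on the higher-data-rate SFs
```

Under these assumptions, the learned preference counts skew toward SF7-SF9, mirroring the traffic-equilibrium argument that faster SFs should undertake more packet load; the paper's actual LR-RL formulation, state/action design, and reward are described in the full text.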

Item Type: Article
Refereed: Yes
Divisions: Science > School of Mathematical, Physical and Computational Sciences > Department of Computer Science
ID Code: 112663
Publisher: IEEE

