Federated Non-Stochastic Multi-Armed Bandit for Channel Sensing in Cognitive Radio Systems
Abstract
This work proposes a novel approach to channel sensing in cognitive radio systems, drawing inspiration from reinforcement learning theory. While the adversarial Multi-Armed Bandit framework is commonly used to manage diverse channel sampling, it struggles to accurately assess resource occupancy due to geographical variations in channel availability across devices. To address this limitation, collaboration among devices is essential and can be effectively achieved through Federated Learning (FL). Integrating FL into a Multi-Armed Bandit framework addresses key challenges, including data heterogeneity, the high cost of data centralization, privacy concerns, and biased learning. We enhance the widely used Exponential-weight algorithm for Exploration and Exploitation (EXP3) by incorporating federation, allowing learning devices to collectively identify channels with less interference. Our simulation results demonstrate the effectiveness of the federated EXP3 (F-EXP3) algorithm by comparing it with the traditional EXP3 and the federated Upper Confidence Bound (UCB). The experiments reveal that F-EXP3 overcomes the limitations of individual learning, leading to superior channel selection performance.
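To make the idea concrete, below is a minimal sketch of per-device EXP3 combined with a simple federation step. The abstract does not specify how F-EXP3 aggregates device state, so the `federate` rule shown here (averaging log-weights across devices every few rounds) is an illustrative assumption, not the paper's method; the class and function names are hypothetical.

```python
# Hedged sketch: standard EXP3 on each cognitive-radio device, plus an assumed
# federation step that averages log-weights across devices. The aggregation
# rule is illustrative only; the paper's F-EXP3 may differ.
import math
import random


class EXP3Device:
    """One device running EXP3 over K candidate channels."""

    def __init__(self, n_channels: int, gamma: float = 0.1):
        self.k = n_channels
        self.gamma = gamma                 # exploration rate
        self.log_w = [0.0] * n_channels    # log-weights for numerical stability

    def _probs(self):
        # Mixture of the weight distribution and uniform exploration.
        m = max(self.log_w)
        w = [math.exp(lw - m) for lw in self.log_w]
        total = sum(w)
        return [(1 - self.gamma) * wi / total + self.gamma / self.k for wi in w]

    def select_channel(self) -> int:
        p = self._probs()
        return random.choices(range(self.k), weights=p)[0]

    def update(self, channel: int, reward: float):
        """Reward in [0, 1], e.g. 1 if the sensed channel was found free."""
        p = self._probs()[channel]
        x_hat = reward / p                          # importance-weighted estimate
        self.log_w[channel] += self.gamma * x_hat / self.k


def federate(devices):
    """Assumed aggregation: replace each device's log-weights by the average."""
    k = devices[0].k
    avg = [sum(d.log_w[i] for d in devices) / len(devices) for i in range(k)]
    for d in devices:
        d.log_w = list(avg)


if __name__ == "__main__":
    # Toy run: 3 devices, 5 channels, periodic federation every 20 rounds.
    random.seed(0)
    devices = [EXP3Device(n_channels=5) for _ in range(3)]
    for t in range(100):
        for d in devices:
            c = d.select_channel()
            reward = 1.0 if random.random() < [0.2, 0.8, 0.4, 0.3, 0.5][c] else 0.0
            d.update(c, reward)
        if (t + 1) % 20 == 0:
            federate(devices)
```

Only the per-channel log-weights are exchanged in this sketch, which mirrors the FL motivation in the abstract: devices share model state rather than raw sensing data.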