Submitted: 01 August 2025
Posted: 04 August 2025
Abstract
Keywords:
1. Introduction
- This paper proposes a novel FL framework with adversarial optimisation to strengthen the security of edge computing networks. By incorporating both a classifier model and an adversary model, the proposed algorithm improves the robustness of the FL model against adversarial attacks, enabling more secure and private FL training across edge devices.
- To train the proposed framework, the adversary model uses the Fast Gradient Sign Method (FGSM) to iteratively generate stronger perturbations based on the classifier model’s responses. This improves model resilience in privacy-sensitive edge computing networks.
- For performance evaluation, extensive simulations have been performed on the 5G Network Intrusion Detection Dataset (5G-NIDD) [10], which validates the adaptability of the proposed algorithm to edge computing networks, including 5G-enabled IoT. Comprehensive experimental analysis provides valuable insights into the proposed FL with adversarial optimisation algorithm in terms of accuracy, scalability, computational efficiency, and time efficiency.
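The FGSM perturbation step mentioned above can be sketched as follows. This is a minimal illustration on a toy linear classifier with an analytic input gradient, not the paper's actual implementation; the weights, inputs, and ε value are hypothetical:

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """FGSM step: move the input by epsilon in the sign of the loss gradient."""
    return x + epsilon * np.sign(grad)

def input_gradient(w, x, y):
    """Gradient of the logistic loss L = log(1 + exp(-y * w.x)) w.r.t. x,
    for a label y in {-1, +1} and a linear classifier with weights w."""
    return -y * w / (1.0 + np.exp(y * np.dot(w, x)))

# Toy example: a correctly classified point is pushed toward the boundary.
w = np.array([2.0, -1.0])   # classifier weights (hypothetical)
x = np.array([0.5, 0.5])    # clean input
y = 1.0                     # true label
x_adv = fgsm_perturb(x, input_gradient(w, x, y), epsilon=0.1)
# x_adv differs from x by exactly ±epsilon in each coordinate.
```

Because FGSM only uses the sign of the gradient, the perturbation always saturates the budget ε in every coordinate, which is why the tables in Section 5 sweep over ε directly.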
2. Related Work
3. System Model and Problem Formulation
4. Proposed Federated Learning with Adversarial Optimisation Algorithm
4.1. Dataset Description and Pre-Processing
4.2. Training of Federated Learning with Adversarial Optimisation Model
4.3. Testing of the Trained Federated Learning with Adversarial Optimisation Model
| Algorithm 1: Federated Learning with Adversarial Optimisation |
|
Input: T: global rounds, M: number of clients, E: local epochs, B: batch size
Input: η: learning rates, ε: perturbation budget
Result: Final global classifier model and adversary model
Initialise global classifier model and adversary model
for t = 1 to T do
    (per-round local training and adversarial optimisation steps; see Section 4.2)
end
Return: Final global classifier parameters and adversary parameters
|
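The overall round structure of Algorithm 1 can be sketched as follows: each client performs local adversarial training (here, FGSM on a toy linear model), and the server aggregates the client models FedAvg-style. All numbers, dataset sizes, and the unweighted-mean aggregation are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def local_update(w_global, data, labels, epsilon, lr, epochs):
    """One client's E local epochs of adversarial training on a linear model."""
    w = w_global.copy()
    for _ in range(epochs):
        for x, y in zip(data, labels):
            # Input gradient of the logistic loss (labels y in {-1, +1}).
            sig = 1.0 / (1.0 + np.exp(y * np.dot(w, x)))
            # Adversary step: FGSM perturbation within budget epsilon.
            x_adv = x + epsilon * np.sign(-y * w * sig)
            # Classifier step: gradient descent on the adversarial example.
            sig_adv = 1.0 / (1.0 + np.exp(y * np.dot(w, x_adv)))
            w -= lr * (-y * x_adv * sig_adv)
    return w

def fedavg(client_models):
    """Server aggregation: unweighted average of client model parameters."""
    return np.mean(client_models, axis=0)

# M = 3 clients with small synthetic local datasets; T = 5 global rounds.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(8, 2)), rng.choice([-1.0, 1.0], size=8))
           for _ in range(3)]
w_global = np.zeros(2)
for t in range(5):
    updates = [local_update(w_global, X, y, epsilon=0.1, lr=0.05, epochs=2)
               for X, y in clients]
    w_global = fedavg(updates)
```

Training on the FGSM-perturbed inputs rather than the clean ones is what makes the aggregated model robust on adversarial test data, at the cost of a small drop on clean data, consistent with the trade-off visible in the results tables of Section 5.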
5. Performance Evaluation
5.1. Implementation of Federated Learning with Adversarial Optimisation Model
6. Conclusion
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
| 5G-NIDD | 5G Network Intrusion Detection Dataset |
| FGSM | Fast Gradient Sign Method |
| FL | Federated Learning |
| FLOPS | Floating-point Operations Per Second |
| GPU | Graphics Processing Unit |
| IID | Independent and Identically Distributed |
| IIoT | Industrial Internet of Things |
| PGD | Projected Gradient Descent |
| C&W | Carlini & Wagner |
References
- Hassan, N.; Yau, K.L.A.; Wu, C. Edge computing in 5G: A review. IEEE Access 2019, 7, 127276–127289.
- Lee, J.; Solat, F.; Kim, T.Y.; Poor, H.V. Federated learning-empowered mobile network management for 5G and beyond networks: From access to core. IEEE Communications Surveys & Tutorials 2024.
- Nowroozi, E.; Haider, I.; Taheri, R.; Conti, M. Federated learning under attack: Exposing vulnerabilities through data poisoning attacks in computer networks. IEEE Transactions on Network and Service Management 2025.
- Han, G.; Ma, W.; Zhang, Y.; Liu, Y.; Liu, S. BSFL: A blockchain-oriented secure federated learning scheme for 5G. Journal of Information Security and Applications 2025, 103983.
- Feng, Y.; Guo, Y.; Hou, Y.; Wu, Y.; Lao, M.; Yu, T.; Liu, G. A survey of security threats in federated learning. Complex & Intelligent Systems 2025, 11, 1–26.
- Rao, B.; Zhang, J.; Wu, D.; Zhu, C.; Sun, X.; Chen, B. Privacy inference attack and defense in centralized and federated learning: A comprehensive survey. IEEE Transactions on Artificial Intelligence 2024.
- Tahanian, E.; Amouei, M.; Fateh, H.; Rezvani, M. A game-theoretic approach for robust federated learning. International Journal of Engineering, Transactions A: Basics 2021, 34, 832–842.
- Guo, Y.; Qin, Z.; Tao, X.; Dobre, O.A. Federated generative-adversarial-network-enabled channel estimation. Intelligent Computing 2024, 3, 0066.
- Grierson, S.; Thomson, C.; Papadopoulos, P.; Buchanan, B. Min-max training: Adversarially robust learning models for network intrusion detection systems. In Proceedings of the 2021 14th International Conference on Security of Information and Networks (SIN). IEEE, 2021, Vol. 1, pp. 1–8.
- Samarakoon, S.; Siriwardhana, Y.; Porambage, P.; Liyanage, M.; Chang, S.Y.; Kim, J.; Kim, J.; Ylianttila, M. 5G-NIDD: A comprehensive network intrusion detection dataset generated over 5G wireless network, 2022. Dataset.
- Bonawitz, K.; Eichner, H.; Grieskamp, W.; Huba, D.; Ingerman, A.; Ivanov, V.; Kiddon, C.; Konečnỳ, J.; Mazzocchi, S.; McMahan, B.; et al. Towards federated learning at scale: System design. Proceedings of Machine Learning and Systems 2019, 1, 374–388.
- McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; y Arcas, B.A. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the International Conference on Artificial Intelligence and Statistics. PMLR, 2017, pp. 1273–1282.
- Bagdasaryan, E.; Veit, A.; Hua, Y.; Estrin, D.; Shmatikov, V. How to backdoor federated learning. In Proceedings of the International Conference on Artificial Intelligence and Statistics. PMLR, 2020, pp. 2938–2948.
- Lyu, L.; Yu, H.; Yang, Q. Threats to federated learning: A survey. arXiv 2020, arXiv:2003.02133.
- Abadi, M.; Chu, A.; Goodfellow, I.; McMahan, H.B.; Mironov, I.; Talwar, K.; Zhang, L. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 2016, pp. 308–318.
- Bonawitz, K.; Ivanov, V.; Kreuter, B.; Marcedone, A.; McMahan, H.B.; Patel, S.; Ramage, D.; Segal, A.; Seth, K. Practical secure aggregation for privacy-preserving machine learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, 2017, pp. 1175–1191.
- Aono, Y.; Hayashi, T.; Wang, L.; Moriai, S.; et al. Privacy-preserving deep learning via additively homomorphic encryption. IEEE Transactions on Information Forensics and Security 2017, 13, 1333–1345.
- Nasr, M.; Shokri, R.; Houmansadr, A. Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning. In Proceedings of the 2019 IEEE Symposium on Security and Privacy (SP). IEEE, 2019, pp. 739–753.
- Zhang, J.; Li, B.; Chen, C.; Lyu, L.; Wu, S.; Ding, S.; Wu, C. Delving into the adversarial robustness of federated learning. In Proceedings of the AAAI Conference on Artificial Intelligence, 2023, Vol. 37, pp. 11245–11253.
- Nguyen, T.D.; Nguyen, T.; Le Nguyen, P.; Pham, H.H.; Doan, K.D.; Wong, K.S. Backdoor attacks and defenses in federated learning: Survey, challenges and future research directions. Engineering Applications of Artificial Intelligence 2024, 127, 107166.
- McCarthy, A.; Ghadafi, E.; Andriotis, P.; Legg, P. Defending against adversarial machine learning attacks using hierarchical learning: A case study on network traffic attack classification. Journal of Information Security and Applications 2023, 72, 103398.
- McCarthy, A.; Ghadafi, E.; Andriotis, P.; Legg, P. Functionality-preserving adversarial machine learning for robust classification in cybersecurity and intrusion detection domains: A survey. Journal of Cybersecurity and Privacy 2022, 2, 154–190.
- White, J.; Legg, P. Evaluating data distribution strategies in federated learning: A trade-off analysis between privacy and performance for IoT security. In Proceedings of AI Applications in Cyber Security and Communication Networks; Hewage, C.; Nawaf, L.; Kesswani, N., Eds.; Singapore, 2024; pp. 17–37.
- White, J.; Legg, P. Federated learning: Data privacy and cyber security in edge-based machine learning. In Data Protection in a Post-Pandemic Society: Laws, Regulations, Best Practices and Recent Solutions; Hewage, C.; Rahulamathavan, Y.; Ratnayake, D., Eds.; Springer International Publishing: Cham, 2023; pp. 169–193.








| Clients | Strategy | Clean Precision | Clean Recall | Clean F1 | Adversarial Precision | Adversarial Recall | Adversarial F1 |
|---|---|---|---|---|---|---|---|
| 5 | FL | 98% | 98% | 98% | 62% | 60% | 57% |
| 5 | Proposed | 96% | 96% | 96% | 93% | 93% | 93% |
| 10 | FL | 98% | 98% | 98% | 67% | 61% | 63% |
| 10 | Proposed | 97% | 97% | 97% | 97% | 97% | 97% |
| 20 | FL | 98% | 98% | 98% | 60% | 56% | 56% |
| 20 | Proposed | 97% | 97% | 97% | 98% | 98% | 98% |
| 30 | FL | 98% | 98% | 98% | 66% | 65% | 64% |
| 30 | Proposed | 98% | 98% | 98% | 99% | 99% | 99% |
| 40 | FL | 98% | 98% | 98% | 72% | 69% | 70% |
| 40 | Proposed | 98% | 98% | 98% | 99% | 99% | 99% |
| Perturbation budget | Strategy | Precision | Recall | F1 |
|---|---|---|---|---|
| ε = 0.01 | FL | 93% | 93% | 93% |
| ε = 0.01 | Proposed | 94% | 94% | 94% |
| ε = 0.05 | FL | 78% | 77% | 78% |
| ε = 0.05 | Proposed | 97% | 97% | 97% |
| ε = 0.1 | FL | 72% | 69% | 70% |
| ε = 0.1 | Proposed | 99% | 99% | 99% |
| ε = 0.2 | FL | 54% | 51% | 52% |
| ε = 0.2 | Proposed | 80% | 81% | 80% |
| ε = 0.3 | FL | 48% | 45% | 46% |
| ε = 0.3 | Proposed | 61% | 57% | 54% |
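The precision, recall, and F1 figures reported in the tables above are the standard binary-classification metrics. As a reminder of how they are computed (the label and prediction lists below are hypothetical, not drawn from 5G-NIDD):

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Binary precision/recall/F1 from label lists (no external dependencies)."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical run: 2 true positives, 1 false positive, 1 false negative.
p, r, f1 = precision_recall_f1([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
```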
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

