Abstract
Federated Learning (FL) enables collaborative model training across decentralized devices while preserving data privacy. However, FL is highly vulnerable to adversarial attacks, including data and model poisoning, which compromise model integrity and security. This chapter introduces the New Label Flipping Attack (New-LFA), a novel strategy that selectively manipulates high-impact training labels to maximize model degradation while evading detection. To assess its impact, we conduct a comparative analysis of existing attack and defense mechanisms, summarizing their effectiveness through detailed evaluations. Through empirical experiments on the benchmark IoT datasets N-BaIoT and UNSW-NB15, we demonstrate that New-LFA degrades model performance significantly more than conventional attacks. We further evaluate the resilience of robust aggregation techniques, including Krum and Trimmed Mean, in mitigating poisoning attacks. Our findings identify Trimmed Mean as the most robust defense mechanism and underscore the necessity of adaptive security strategies to enhance FL resilience against evolving threats.
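For readers unfamiliar with the defense the abstract highlights, the standard coordinate-wise Trimmed Mean aggregation rule can be sketched in a few lines of Python. This is a minimal illustration of the general technique, not the chapter's implementation; the function name, trim ratio, and toy data below are assumptions chosen for the example.

```python
import numpy as np

def trimmed_mean_aggregate(client_updates, trim_ratio=0.1):
    """Coordinate-wise trimmed mean over stacked client updates.

    client_updates: array of shape (num_clients, num_params),
                    one flattened model update per row.
    trim_ratio: fraction of extreme values discarded at each end
                of every coordinate before averaging.
    """
    updates = np.asarray(client_updates, dtype=float)
    n = updates.shape[0]
    k = int(n * trim_ratio)  # number of values trimmed at each end
    # Sort each coordinate independently across clients, drop the
    # k smallest and k largest values, then average what remains.
    sorted_updates = np.sort(updates, axis=0)
    if k > 0:
        sorted_updates = sorted_updates[k:n - k]
    return sorted_updates.mean(axis=0)

# Toy demonstration: nine honest clients plus one poisoned update.
rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.05, size=(9, 4))
poisoned = np.full((1, 4), 50.0)  # an extreme poisoning update
all_updates = np.vstack([honest, poisoned])

print("plain mean:  ", all_updates.mean(axis=0))
print("trimmed mean:", trimmed_mean_aggregate(all_updates, trim_ratio=0.1))
```

In the toy run, the plain mean is dragged far from the honest consensus by the single poisoned row, while the trimmed mean stays near 1.0: discarding the extremes per coordinate bounds the influence any small set of attackers can exert, which is the intuition behind its robustness in the chapter's evaluation.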
| Original language | English |
|---|---|
| Title of host publication | Adversarial Example Detection and Mitigation Using Machine Learning |
| Editors | Ehsan Nowroozi, Rahim Taheri, Lucas Cordeiro |
| Publisher | Springer Cham |
| Pages | 237-248 |
| Number of pages | 12 |
| Edition | 1st |
| ISBN (Electronic) | 9783031994470 |
| ISBN (Print) | 9783031994463, 9783031994494 |
| DOIs | |
| Publication status | Published - 22 Jan 2026 |