Defense strategies in federated learning against adversarial attacks

Hadiseh Rezaei*, Rahim Taheri, Ehsan Nowroozi

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Chapter (peer-reviewed)

Abstract

Federated Learning (FL) enables collaborative model training across decentralized devices while preserving data privacy. However, FL is highly vulnerable to adversarial attacks, including data and model poisoning, which compromise model integrity and security. This chapter introduces a New Label Flipping Attack (New-LFA), a novel attack strategy that selectively manipulates high-impact training labels to maximize model degradation while evading detection. To assess its impact, we conduct a comparative analysis of existing attack and defense mechanisms, summarizing their effectiveness through detailed evaluations. Through empirical experiments on benchmark IoT datasets (N-BaIoT and UNSW-NB15), we demonstrate that New-LFA significantly reduces model performance compared to conventional attacks. Furthermore, we evaluate the resilience of robust aggregation techniques, including Krum and Trimmed Mean, in mitigating poisoning attacks. Our findings highlight the effectiveness of Trimmed Mean as the most robust defense mechanism and underscore the necessity of adaptive security strategies to enhance FL resilience against evolving threats.
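The abstract identifies Trimmed Mean as the most robust aggregation defense evaluated. As a minimal sketch of that general technique (not the chapter's specific implementation — the function name and `trim_ratio` parameter are illustrative assumptions), a coordinate-wise trimmed mean discards the extreme values contributed per parameter before averaging, which limits the influence of poisoned client updates:

```python
import numpy as np

def trimmed_mean(updates, trim_ratio=0.2):
    """Coordinate-wise trimmed mean over client model updates.

    updates: array of shape (n_clients, n_params).
    trim_ratio: fraction of clients trimmed from EACH end, per coordinate
                (hypothetical parameter name; the chapter's setting may differ).
    """
    updates = np.asarray(updates, dtype=float)
    n_clients = updates.shape[0]
    k = int(n_clients * trim_ratio)  # clients removed from each extreme
    # Sort each coordinate independently across clients, then drop the
    # k smallest and k largest values before averaging.
    sorted_updates = np.sort(updates, axis=0)
    if k > 0:
        sorted_updates = sorted_updates[k : n_clients - k]
    return sorted_updates.mean(axis=0)

# A single poisoned update (100.0) is trimmed away before aggregation:
aggregated = trimmed_mean([[0.0], [1.0], [2.0], [3.0], [100.0]], trim_ratio=0.2)
# → array([2.])
```

Because the trimming is per coordinate, an attacker controlling fewer than `k` clients cannot pull any individual parameter arbitrarily far, which is why such statistics-based aggregators resist poisoning better than a plain federated average.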
Original language: English
Title of host publication: Adversarial Example Detection and Mitigation Using Machine Learning
Editors: Ehsan Nowroozi, Rahim Taheri, Lucas Cordeiro
Publisher: Springer Cham
Pages: 237-248
Number of pages: 12
Edition: 1st
ISBN (Electronic): 9783031994470
ISBN (Print): 9783031994463, 9783031994494
DOIs
Publication status: Published - 22 Jan 2026
