Model poisoning attack against federated learning with adaptive aggregation

Seyedsina Nabavirazavi, Rahim Taheri*, Mani Ghahremani, Sundararaja Sitharama Iyengar

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Chapter (peer-reviewed)

Abstract

Federated Learning (FL) has emerged as a promising decentralized paradigm for training machine learning models across distributed devices without centralizing raw data. However, FL's growing adoption makes it essential to scrutinize its vulnerabilities and security challenges, particularly adversarial attacks. This book chapter examines FL's susceptibility to adversarial model poisoning attacks and evaluates the robustness of adaptive federated aggregation methods, including FedAdagrad, FedYogi, and FedAdam. Through empirical investigations on diverse image datasets, the chapter studies these state-of-the-art algorithms and exposes their potential vulnerabilities under adversarial manipulation. The results reveal a nuanced interplay between adaptive aggregation strategies and adversarial attacks, highlighting the strengths and limitations of current security mechanisms in federated learning. The findings underscore the importance of fortifying FL frameworks with robust defenses against adversarial incursions, moving the field towards more secure, reliable, and resilient distributed machine learning. The chapter thereby contributes to the evolving landscape of FL security, improving our understanding of the challenges and opportunities ahead.
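For background, the adaptive aggregation methods named above follow the FedOpt template of Reddi et al. ("Adaptive Federated Optimization"), in which the server treats the average of the client updates as a pseudo-gradient and applies an Adam-style step. The sketch below is a minimal illustration of that template, not the chapter's implementation; the hyperparameter values, the plain mean aggregation, and the toy poisoning demo are assumptions made for illustration.

```python
# Minimal sketch of a FedAdam-style server update (after Reddi et al.).
# Hyperparameters and the simple mean of client deltas are illustrative
# assumptions, not the chapter's code.
import numpy as np

def fedadam_server_update(x, client_deltas, m, v,
                          eta=0.01, beta1=0.9, beta2=0.99, tau=1e-3):
    """One round of server-side FedAdam.

    x             -- current global model parameters (1-D array)
    client_deltas -- list of per-client updates (client_model - x)
    m, v          -- server-side first/second moment accumulators
    """
    # Pseudo-gradient: aggregate of client updates. A single poisoned
    # delta enters this mean unchecked, which is the attack surface
    # model poisoning exploits.
    delta = np.mean(client_deltas, axis=0)

    # Adam-style moment estimates kept on the server across rounds.
    m = beta1 * m + (1 - beta1) * delta
    v = beta2 * v + (1 - beta2) * delta ** 2  # FedYogi/FedAdagrad change this line

    # Adaptive server step; tau controls the degree of adaptivity.
    x = x + eta * m / (np.sqrt(v) + tau)
    return x, m, v

# Toy usage: two honest clients and one attacker submitting a boosted
# (scaled-up) malicious delta, a simple model-replacement-style poisoning.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.zeros(4)
    m, v = np.zeros(4), np.zeros(4)
    honest = [rng.normal(0.1, 0.01, 4) for _ in range(2)]
    poisoned = [-10.0 * np.ones(4)]
    x, m, v = fedadam_server_update(x, honest + poisoned, m, v)
    print(x)  # per-coordinate movement stays bounded by roughly eta
```

One reason these optimizers are worth probing is visible in the last line of the update: the normalized step m / (sqrt(v) + tau) implicitly bounds per-coordinate movement by roughly eta per round, which can damp, though not eliminate, the influence of a single boosted malicious update.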
Original language: English
Title of host publication: Adversarial Multimedia Forensics
Editors: Ehsan Nowroozi, Kassem Kallas, Alireza Jolfaei
Publisher: Springer
Chapter: 1
Pages: 1-27
Number of pages: 27
Edition: 1st
ISBN (Electronic): 9783031498039
ISBN (Print): 9783031498022
Publication status: Published - 15 Nov 2023

Publication series

Name: Advances in Information Security
Publisher: Springer
Volume: 104
ISSN (Print): 1568-2633
ISSN (Electronic): 2512-2193

Keywords

  • federated learning
  • model poisoning attack
  • adaptive aggregation
  • robustness
  • adversarial attack
  • image classification
