TY - CHAP
T1 - Model poisoning attack against federated learning with adaptive aggregation
AU - Nabavirazavi, Seyedsina
AU - Taheri, Rahim
AU - Ghahremani, Mani
AU - Iyengar, Sundararaja Sitharama
PY - 2023/11/15
Y1 - 2023/11/15
N2 - Federated Learning (FL) has emerged as a promising decentralized paradigm for training machine learning models across distributed devices, ushering in a new era of collaborative data-driven insights. However, the growing adoption of FL makes it necessary to scrutinize its vulnerabilities and security challenges, particularly with respect to adversarial attacks. This book chapter examines FL's susceptibility to adversarial model poisoning attacks and sheds light on the robustness of adaptive federated aggregation methods, including FedAdagrad, FedYogi, and FedAdam. Through empirical investigations conducted on diverse image datasets, the chapter explores these state-of-the-art algorithms and the vulnerabilities they exhibit under adversarial manipulation. The research reveals the nuanced interplay between adaptive aggregation strategies and adversarial attacks, exposing the strengths and limitations of contemporary security paradigms in federated learning. The findings underscore the critical importance of fortifying FL frameworks with robust defenses against adversarial incursions, propelling the field towards more secure, reliable, and resilient distributed machine learning practices. This chapter offers a valuable contribution to the evolving landscape of FL security, enhancing our understanding of the challenges and opportunities that lie ahead.
KW - federated learning
KW - model poisoning attack
KW - adaptive aggregation
KW - robustness
KW - adversarial attack
KW - image classification
DO - 10.1007/978-3-031-49803-9_1
M3 - Chapter (peer-reviewed)
SN - 9783031498022
T3 - Advances in Information Security
SP - 1
EP - 27
BT - Adversarial Multimedia Forensics
A2 - Nowroozi, Ehsan
A2 - Kallas, Kassem
A2 - Jolfaei, Alireza
PB - Springer
ER -