TY - GEN
T1 - Impact of aggregation function randomization against model poisoning in federated learning
AU - Nabavirazavi, Seyedsina
AU - Taheri, Rahim
AU - Shojafar, Mohammad
AU - Iyengar, Sundararaja Sitharama
PY - 2024/05/29
Y1 - 2024/05/29
AB - Federated learning has gained significant attention as a privacy-preserving approach for training machine learning models across decentralized devices. However, this distributed learning paradigm is susceptible to adversarial attacks, particularly model poisoning attacks, in which adversaries inject malicious model updates to compromise the integrity of the global model. In this paper, we investigate the impact of randomness on model poisoning attacks in federated networks, where the server randomly selects one of two aggregation rules, Krum and Trimmed Mean, in each federated round. We present three distinct adversaries: one targeting Krum throughout the entire learning process, another targeting Trimmed Mean throughout, and a third employing a randomized strategy that targets either Krum or Trimmed Mean in each round. Our objective is to evaluate how effectively each adversary reduces the overall accuracy of the federated network. We propose novel techniques to craft poisoned models and explore the efficacy of these attacks in exploiting the aggregation rules. We evaluate our proposed methods on the Fashion-MNIST dataset. The experiments reveal the robustness of the federated network against the proposed adversarial scenarios, contributing to a better understanding of the vulnerabilities and defenses in federated learning systems.
KW - Federated Learning
KW - Model Poisoning Attack
KW - Krum Aggregation Function
KW - Robustness
KW - Randomization
KW - Trimmed Mean Aggregation
DO - 10.1109/TrustCom60117.2023.00043
M3 - Conference contribution
SN - 9798350382006
T3 - TrustCom Proceedings Series
SP - 165
EP - 172
BT - 2023 IEEE 22nd International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom)
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 22nd IEEE International Conference on Trust, Security and Privacy in Computing and Communications, TrustCom 2023
Y2 - 1 November 2023 through 3 November 2023
ER -