Enhancing federated learning robustness through randomization and mixture

Seyedsina Nabavirazavi, Rahim Taheri*, Sundararaja Sitharama Iyengar

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Protecting data privacy is a significant challenge in machine learning (ML), and federated learning (FL) has emerged as a decentralized learning paradigm that addresses this issue. However, FL is vulnerable to poisoning attacks, which manipulate and disrupt the learning process to substantially increase the system's error rate. The aggregation algorithm's robustness is crucial to preventing such attacks; however, previous Byzantine-robust algorithms have proved vulnerable to maliciously crafted model updates. We propose three aggregation functions: Switch, Layered-Switch, and Weighted FedAvg. Our proposed methods involve switching between aggregation functions. We evaluate the effectiveness of these methods against a model poisoning attack to demonstrate their robustness. To assess the performance of our proposed aggregation functions, we employed them within a federated learning framework to classify the MNIST, CIFAR10, and Fashion-MNIST datasets. The simulation results demonstrate that all of these methods exhibit greater robustness against the employed adversarial attack. Our approach improves performance under a poisoning attack, surpassing previous Byzantine-robust algorithms by 5%, 15%, and 25% on the MNIST, CIFAR10, and CIFAR100 datasets, respectively. Among the proposed aggregation functions, Weighted FedAvg achieves the highest success rate on the datasets used in this research.
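The core idea of switching between aggregation functions can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes the switch mechanism randomly alternates each round between plain FedAvg and a Krum-style robust aggregator, with all function names, the `p_robust` parameter, and the selection rule being illustrative:

```python
import random
import numpy as np

def fedavg(updates):
    # Standard FedAvg step: coordinate-wise mean of the client updates.
    return np.mean(updates, axis=0)

def krum(updates, n_byzantine):
    # Krum-style selection: return the single update with the smallest sum of
    # squared L2 distances to its n - f - 2 nearest neighbours.
    n = len(updates)
    dists = np.array([[np.sum((u - v) ** 2) for v in updates] for u in updates])
    k = n - n_byzantine - 2
    # sort each row and skip the zero self-distance at position 0
    scores = [np.sum(np.sort(row)[1:k + 1]) for row in dists]
    return updates[int(np.argmin(scores))]

def switch_aggregate(updates, n_byzantine, p_robust=0.5, rng=random):
    # Hypothetical "Switch" rule: each round, randomly pick one of the two
    # aggregators, so an attacker cannot tailor malicious updates to a
    # single, fixed aggregation function.
    if rng.random() < p_robust:
        return krum(updates, n_byzantine)
    return fedavg(updates)
```

The randomization is the point: a model poisoning attack optimized against FedAvg's mean is blunted in rounds where Krum is drawn, and vice versa.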
Original language: English
Pages (from-to): 28-43
Journal: Future Generation Computer Systems
Early online date: 23 Apr 2024
Publication status: Early online - 23 Apr 2024


  • Federated Learning
  • Poisoning Attack
  • Krum Aggregation Function
  • Robustness
  • Randomization
  • Switch Aggregation
