Abstract
Connected and autonomous vehicles, along with the expanding Internet of Vehicles (IoV), are increasingly exposed to complex and evolving cyberattacks. Consequently, Intrusion Detection Systems (IDS) have become a vital component of modern vehicular cybersecurity. Federated Learning (FL) enables multiple vehicles to collaboratively train detection models while keeping their local data private, providing a decentralized alternative to traditional centralized learning. Despite these advantages, FL-based IDS frameworks remain vulnerable to attacks. To address this vulnerability, we propose an explainable federated intrusion detection framework that enhances both the security and interpretability of IDS in connected vehicles. The framework employs a Deep Neural Network (DNN) within a federated setting and integrates explainability through the Shapley Additive Explanations (SHAP) method. This Explainable Artificial Intelligence (XAI) component identifies the most influential network features contributing to detection decisions and assists in recognizing anomalies arising from malicious or corrupted clients. Experimental validation on the CICEVSE2024 and CICIoV2024 vehicular datasets demonstrates that the proposed system achieves high detection accuracy. Moreover, the XAI module improves transparency and enables analysts to verify and understand the model’s decision-making process. Compared with both centralized IDS models and conventional federated approaches without explainability, the proposed system delivers comparable performance, stronger resilience to attacks, and significantly enhanced interpretability. Overall, this work demonstrates
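The collaborative training described above typically relies on a server-side aggregation step such as federated averaging (FedAvg): each vehicle trains on its local data and ships only model weights, which the server combines weighted by local dataset size. A minimal sketch of that aggregation step, assuming FedAvg-style weighting (the abstract does not specify the exact aggregation rule, and the function and variable names below are illustrative):

```python
# Minimal FedAvg-style aggregation sketch. Hypothetical illustration of
# the standard federated-averaging step; the paper's exact training and
# aggregation procedure is not detailed in the abstract.

def fed_avg(client_weights, client_sizes):
    """Average flattened client weight vectors, weighted by local dataset size.

    client_weights: list of equal-length lists of floats (one per client)
    client_sizes:   list of local sample counts (one per client)
    """
    total = sum(client_sizes)
    aggregated = [0.0] * len(client_weights[0])
    for weights, size in zip(client_weights, client_sizes):
        share = size / total  # client's contribution to the global model
        for i, w in enumerate(weights):
            aggregated[i] += share * w
    return aggregated

# Example: two vehicles with tiny flattened DNN weight vectors
w_a = [0.2, 0.4]   # vehicle A, 100 local samples
w_b = [0.6, 0.0]   # vehicle B, 300 local samples
print(fed_avg([w_a, w_b], [100, 300]))  # ≈ [0.5, 0.1]
```

Only the weight vectors cross the network, which is what keeps the raw vehicular traffic data private on each client, as the abstract emphasizes.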
| Original language | English |
|---|---|
| Article number | 4508 |
| Number of pages | 18 |
| Journal | Electronics |
| Volume | 14 |
| Issue number | 22 |
| DOIs | |
| Publication status | Published - 18 Nov 2025 |
Keywords
- explainable AI
- federated learning
- intrusion detection systems
- connected vehicles