Abstract
The increasing sophistication of adversarial malware attacks highlights the urgent need for robust detection systems capable of resisting evasion strategies. This paper presents a comprehensive evaluation of vulnerabilities in machine learning-based malware detectors, focusing on adversarial effectiveness across diverse attack paradigms. We systematically assess gradient-based and optimization-driven attacks against hybrid CNN-LSTM classifiers, revealing significant susceptibility under adversarial pressure. Using explainable AI techniques—SHAP and LIME—we uncover critical decision-making vulnerabilities tied to semantically meaningful malware features. Our findings expose structural limitations in current deep learning approaches: standalone CNN and LSTM models remain highly vulnerable, while hybrid models exhibit unexpected robustness at higher perturbation levels. The study provides actionable insights for developing more resilient detection systems.
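To illustrate the class of gradient-based evasion attacks the abstract refers to, the following is a minimal FGSM-style sketch against a toy linear detector. The weights, bias, feature vector, and perturbation budget below are all invented for illustration and do not come from the paper; the attack direction follows the standard fast-gradient-sign recipe.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear "malware detector": score = sigmoid(w . x + b).
# All values here are hypothetical, chosen only to demonstrate the mechanics.
w = np.array([1.2, -0.4, 0.9, 0.7])   # feature weights
b = -0.5                               # bias
x = np.array([0.8, 0.1, 0.6, 0.5])    # feature vector of a "malware" sample

# FGSM-style evasion: perturb each feature in the direction that lowers
# the detector's score, x_adv = x - eps * sign(d score / d x).
# For a linear model, the gradient of (w . x + b) w.r.t. x is just w.
eps = 0.3
x_adv = x - eps * np.sign(w)

score_clean = sigmoid(w @ x + b)
score_adv = sigmoid(w @ x_adv + b)
print(f"clean score: {score_clean:.3f}, adversarial score: {score_adv:.3f}")
```

The same one-step sign-of-gradient logic carries over to deep models, where the gradient is obtained by backpropagation rather than read directly off the weights; realistic malware evasion additionally constrains the perturbation so the modified binary remains functional.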
| Original language | English |
|---|---|
| Title of host publication | Proceedings of 5th International Mobile, Intelligent, and Ubiquitous Computing Conference |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 123-128 |
| Number of pages | 6 |
| ISBN (Electronic) | 9798331539221 |
| ISBN (Print) | 9798331539238 |
| DOIs | |
| Publication status | Published - 21 Oct 2025 |
| Event | 5th International Mobile, Intelligent, and Ubiquitous Computing Conference, Misr International University, Cairo, Egypt. Duration: 17 Sept 2025 → 18 Sept 2025. Conference number: 5. https://www.aconf.org/conf_217003.2025_International_Mobile,_Intelligent,_and_Ubiquitous_Computing_Conference_(MIUCC).html |
Conference
| Conference | 5th International Mobile, Intelligent, and Ubiquitous Computing Conference |
|---|---|
| Abbreviated title | MIUCC |
| Country/Territory | Egypt |
| City | Cairo |
| Period | 17/09/25 → 18/09/25 |
| Internet address | https://www.aconf.org/conf_217003.2025_International_Mobile,_Intelligent,_and_Ubiquitous_Computing_Conference_(MIUCC).html |
Keywords
- Adversarial malware
- Deep learning security
- Explainable AI
- Evasion attacks