Can machine learning model with static features be fooled: an adversarial machine learning approach

Rahim Taheri, Reza Javidan, Mohammad Shojafar*, P. Vinod, Mauro Conti

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


The widespread adoption of smartphones dramatically increases the risk of attacks and the spread of mobile malware, especially on the Android platform. Machine learning-based solutions have already been used as a tool to supersede signature-based anti-malware systems. However, malware authors leverage features from malicious and legitimate samples to estimate statistical differences in order to create adversarial examples. Hence, to evaluate the vulnerability of machine learning algorithms in malware detection, we propose five different attack scenarios to perturb malicious applications (apps). By doing this, the classification algorithm inappropriately fits the discriminant function on the set of data points, eventually yielding a higher misclassification rate. Further, to distinguish adversarial examples from benign samples, we propose two defense mechanisms to counter these attacks. To validate our attacks and solutions, we test our model on three different benchmark datasets. We also test our methods using various classification algorithms and compare them with the state-of-the-art data poisoning method based on the Jacobian matrix. Promising results show that the generated adversarial samples can evade detection with a very high probability. Additionally, evasive variants generated by our attack models, when used to harden the developed anti-malware system, improve the detection rate by up to 50% when using the generative adversarial network (GAN) method.
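To make the kind of attack the abstract describes concrete, the sketch below shows a minimal, hypothetical version of feature-addition evasion against a static-feature classifier: a logistic regression is trained on synthetic binary feature vectors (a stand-in for Drebin-style permission/API indicators; the dataset, model, and greedy strategy here are illustrative assumptions, not the paper's actual attack scenarios). Because the perturbation only adds features whose weights point toward the benign class, the classifier's score for each perturbed sample can only decrease.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical static-feature dataset: 1 = permission/API call present.
# Malware tends to set the first half of the features; benign apps the rest.
n, d = 200, 20
y = rng.integers(0, 2, n)                      # 1 = malware, 0 = benign
X = (rng.random((n, d)) < 0.15).astype(float)
X[y == 1, : d // 2] += (rng.random((int((y == 1).sum()), d // 2)) < 0.5)
X[y == 0, d // 2 :] += (rng.random((int((y == 0).sum()), d - d // 2)) < 0.5)
X = np.clip(X, 0.0, 1.0)

# Train a logistic regression detector with plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * (X.T @ g) / n
    b -= 0.1 * g.mean()

def predict(x):
    """True = flagged as malware. Works on a single vector or a matrix."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b))) > 0.5

def evade(x, budget=10):
    """Greedily set absent features whose (negative) weights pull the
    score toward 'benign' -- feature addition keeps the app functional."""
    x = x.copy()
    for j in np.argsort(w):                    # most benign-looking first
        if budget == 0 or not predict(x):
            break
        if x[j] == 0 and w[j] < 0:
            x[j] = 1.0
            budget -= 1
    return x

malware = X[y == 1]
evaded = np.array([evade(x) for x in malware])
before = predict(malware).mean()
after = predict(evaded).mean()
print(f"detected before attack: {before:.2f}, after: {after:.2f}")
```

Since every flipped bit has a negative weight, each perturbed sample's decision score is no larger than the original's, so the measured detection rate can only drop; the paper's stronger attacks (e.g. Jacobian-based perturbation) and GAN-based hardening go well beyond this toy greedy strategy.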

Original language: English
Pages (from-to): 3233-3253
Number of pages: 21
Journal: Cluster Computing
Issue number: 4
Early online date: 17 Mar 2020
Publication status: Published - 1 Dec 2020


  • Adversarial machine learning
  • Android malware detection
  • Generative adversarial network
  • Jacobian algorithm
  • Poison attacks


