Decision tree ensemble based classification of terrorist attacks using eXplainable Artificial Intelligence

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

The study proposes using five benchmark machine learning models alongside XGBoost, applied for the first time to an existing case study to predict the success of suicide terrorist attacks. Utilizing data from the Global Terrorism Database (GTD), the study evaluates model effectiveness to aid decision-making for emergency responders and policymakers. Employing explainable Artificial Intelligence (XAI) techniques such as SHAP ensures transparent decision-making processes. XGBoost performed best in accuracy and overall performance, while LightGBM excelled in explainability, with SHAP providing global and local insights into the models' decision-making. The primary goal is to enhance user comprehension and facilitate informed decision-making in critical scenarios, prioritizing transparency and trustworthiness.
Original language: English
Title of host publication: Proceedings of IEEE Intelligent Systems IS'24
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9798350350982
ISBN (Print): 9798350350999
DOIs
Publication status: Published - 9 Oct 2024
Event: 12th IEEE International Conference on Intelligent Systems - Varna, Bulgaria
Duration: 29 Aug 2024 - 31 Aug 2024

Publication series

Name: 2024 IEEE 12th International Conference on Intelligent Systems (IS)
Publisher: IEEE
ISSN (Print): 2832-4145
ISSN (Electronic): 2767-9802

Conference

Conference: 12th IEEE International Conference on Intelligent Systems
Country/Territory: Bulgaria
City: Varna
Period: 29/08/24 - 31/08/24

Keywords

  • Global Terrorism Database (GTD)
  • machine learning
  • Terrorism Prediction
  • Explainable AI (XAI)
  • SHAP
