Neural network based prediction of terrorist attacks using explainable artificial intelligence

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

AI has transformed the field of terrorism prediction, allowing law enforcement agencies to identify potential threats much more quickly and accurately. This paper proposes a first-time application of a neural network to predict the “success” of a terrorist attack. The neural network attains an accuracy of 91.66% and an F1 score of 0.954. This accuracy and F1 score are higher than those achieved with alternative benchmark models. However, using AI for predictions in high-stakes decisions also has limitations, including possible biases and ethical concerns. Therefore, the explainable AI (XAI) tool LIME is used to provide more insights into the algorithm's inner workings.
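The paper's code and exact architecture are not reproduced on this page, but the abstract describes a standard pipeline: a feed-forward neural network trained on tabular attack features, followed by LIME to explain individual predictions. The sketch below illustrates that pipeline under stated assumptions: scikit-learn's MLPClassifier stands in for the paper's network, and randomly generated placeholder data stands in for the encoded Global Terrorism Database (GTD) features; only the LIME calls follow the real lime library API.

```python
# Minimal sketch, NOT the paper's implementation: MLPClassifier is a
# stand-in for the authors' network, and the data below is synthetic
# placeholder data standing in for encoded GTD features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
# Placeholder feature matrix (e.g. attack type, weapon type, region
# after encoding); labels mark whether the attack "succeeded".
X = rng.random((1000, 10))
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Simple feed-forward network trained on the tabular features.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                    random_state=0).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))

# LIME explains one prediction by fitting a local surrogate model
# around the instance and reporting per-feature contributions.
explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names,
    class_names=["failure", "success"], mode="classification",
)
exp = explainer.explain_instance(X_test[0], clf.predict_proba,
                                 num_features=5)
print(exp.as_list())  # top feature contributions for this instance
```

This mirrors the role LIME plays in the paper: the global model stays a black box, while each individual "success" prediction gets a local, human-readable attribution over the input features.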
Original language: English
Title of host publication: Proceedings of IEEE CAI 2023: Conference on Artificial Intelligence
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 2
ISBN (Electronic): 9798350339840
ISBN (Print): 9798350339857
DOIs
Publication status: Published - 2 Aug 2023
Event: IEEE CAI 2023: Conference on Artificial Intelligence - Santa Clara, United States
Duration: 5 Jun 2023 - 6 Jun 2023
https://cai.ieee.org/2023/

Conference

Conference: IEEE CAI 2023: Conference on Artificial Intelligence
Country/Territory: United States
City: Santa Clara
Period: 5/06/23 - 6/06/23
Internet address: https://cai.ieee.org/2023/

Keywords

  • explainable AI
  • terrorism prediction
  • Global Terrorism Database (GTD)
  • LIME
  • neural networks
