Abstract
AI has transformed the field of terrorism prediction, allowing law enforcement agencies to identify potential threats more quickly and accurately. This paper presents a first application of a neural network to predicting the “success” of a terrorist attack. The neural network attains an accuracy of 91.66% and an F1 score of 0.954, exceeding those achieved by alternative benchmark models. However, using AI for predictions in high-stakes decisions also has limitations, including possible biases and ethical concerns. Therefore, the explainable AI (XAI) tool LIME is used to provide more insight into the algorithm's inner workings.
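The pipeline the abstract describes (a neural-network classifier evaluated by accuracy and F1, then explained locally with a LIME-style surrogate) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the synthetic data stands in for the Global Terrorism Database, and the hand-rolled local surrogate only approximates what the LIME library does.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import Ridge
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for GTD features; the real paper's data,
# features, and hyperparameters are not reproduced here.
X, y = make_classification(n_samples=1000, n_features=8,
                           n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
acc = accuracy_score(y_test, pred)   # the paper reports 91.66%
f1 = f1_score(y_test, pred)          # the paper reports 0.954

# LIME-style local explanation: perturb one instance, weight the
# perturbations by proximity, and fit a weighted linear surrogate
# whose coefficients approximate the network's local behaviour.
def lime_like_explanation(model, x, n_samples=2000, scale=0.5, seed=0):
    rng = np.random.default_rng(seed)
    perturbed = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    probs = model.predict_proba(perturbed)[:, 1]
    dists = np.linalg.norm(perturbed - x, axis=1)
    weights = np.exp(-(dists ** 2) / (2 * scale ** 2))  # RBF proximity kernel
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(perturbed, probs, sample_weight=weights)
    return surrogate.coef_  # per-feature local importance around x

local_importance = lime_like_explanation(clf, X_test[0])
```

In practice one would use the `lime` package's `LimeTabularExplainer` rather than this hand-rolled surrogate; the sketch above only makes the underlying idea (a locally weighted linear approximation of the model) concrete.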
Original language | English |
---|---|
Title of host publication | Proceedings of IEEE CAI 2023: Conference on Artificial Intelligence |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Number of pages | 2 |
ISBN (Electronic) | 9798350339840 |
ISBN (Print) | 9798350339857 |
DOIs | |
Publication status | Published - 2 Aug 2023 |
Event | IEEE CAI 2023: Conference on Artificial Intelligence - Santa Clara, United States. Duration: 5 Jun 2023 → 6 Jun 2023. https://cai.ieee.org/2023/ |
Conference
Conference | IEEE CAI 2023: Conference on Artificial Intelligence |
---|---|
Country/Territory | United States |
City | Santa Clara |
Period | 5/06/23 → 6/06/23 |
Internet address | https://cai.ieee.org/2023/ |
Keywords
- explainable AI
- terrorism prediction
- Global Terrorism Database (GTD)
- LIME
- neural networks