TY - JOUR
T1 - DeepGuard: identification and attribution of AI-generated synthetic images
AU - Namani, Mouna Yasmine
AU - Reghioua, Ikram
AU - Bendiab, Gueltoum
AU - Labiod, Mohamed Ayman
AU - Shiaeles, Stavros
PY - 2025/2/8
Y1 - 2025/2/8
N2 - Text-to-image (T2I) synthesis, driven by advancements in deep learning and generative models, has seen significant improvements, enabling the creation of highly realistic images from textual descriptions. However, this rapid development brings challenges in distinguishing synthetic images from genuine ones, raising concerns in critical areas such as security, privacy, and digital forensics. To address these concerns and ensure the reliability and authenticity of data, this paper conducts a systematic study on detecting fake images generated by text-to-image synthesis models. Specifically, it evaluates the effectiveness of deep learning methods that leverage ensemble learning for detecting fake images. Additionally, it introduces a multi-classification technique to attribute fake images to their source models, thereby enabling accountability for model misuse. The effectiveness of these methods is assessed through extensive simulations and proof-of-concept experiments. The results reveal that these methods can effectively detect fake images and associate them with their respective generation models, achieving impressive accuracy rates ranging from 98.00% to 99.87% on our custom dataset, “DeepGuardDB”. These findings highlight the potential of the proposed techniques to mitigate synthetic media risks, ensuring a safer digital space with preserved authenticity across various domains, including journalism, legal forensics, and public safety.
KW - Image deepfake
KW - security
KW - digital forensics
KW - generative AI
KW - cyberattack
UR - https://www.mdpi.com/2079-9292/14/4/665
DO - 10.3390/electronics14040665
M3 - Article
SN - 2079-9292
VL - 14
SP - 1
EP - 16
JO - Electronics
JF - Electronics
IS - 4
M1 - 665
ER -