TY - GEN
T1 - Trustworthy and reliable AI for heart disease diagnosis: advancing ethical and explainable healthcare decision-making
AU - Gogi, Giovanah
AU - Gurung, Santosh Kumar
AU - Gegov, Alexander
AU - Arabikhan, Farzad
AU - Ichtev, Alexandar
PY - 2025/11/14
Y1 - 2025/11/14
N2 - The integration of artificial intelligence (AI) in healthcare decision-making has revolutionised the diagnosis and treatment of many diseases. However, challenges such as model interpretability, data quality, algorithmic bias, and ethical considerations remain barriers. This paper presents a multi-algorithm approach to heart disease diagnosis that prioritises accuracy, explainability, and ethical AI principles. It aligns with Explainable Artificial Intelligence (XAI) principles by providing ante-hoc transparency through careful feature selection and a CNN model design tailored to heart disease diagnosis. By leveraging interpretable AI techniques and addressing key challenges, this paper demonstrates how trustworthy and reliable AI systems can transform healthcare. Additionally, it explores the potential of post-hoc explainability techniques, such as SHAP and LIME, to clarify model decisions and build trust among healthcare professionals. This work bridges the gap between AI and clinical practice.
AB - The integration of artificial intelligence (AI) in healthcare decision-making has revolutionised the diagnosis and treatment of many diseases. However, challenges such as model interpretability, data quality, algorithmic bias, and ethical considerations remain barriers. This paper presents a multi-algorithm approach to heart disease diagnosis that prioritises accuracy, explainability, and ethical AI principles. It aligns with Explainable Artificial Intelligence (XAI) principles by providing ante-hoc transparency through careful feature selection and a CNN model design tailored to heart disease diagnosis. By leveraging interpretable AI techniques and addressing key challenges, this paper demonstrates how trustworthy and reliable AI systems can transform healthcare. Additionally, it explores the potential of post-hoc explainability techniques, such as SHAP and LIME, to clarify model decisions and build trust among healthcare professionals. This work bridges the gap between AI and clinical practice.
KW - artificial intelligence
KW - explainable artificial intelligence
KW - machine learning
KW - convolutional neural networks
KW - cardiovascular disease
KW - heart disease
UR - https://2025.ijcnn.org/authors/important-dates
U2 - 10.1109/IJCNN64981.2025.11229043
DO - 10.1109/IJCNN64981.2025.11229043
M3 - Conference contribution
SN - 9798331510435
T3 - IEEE IJCNN Proceedings
BT - 2025 International Joint Conference on Neural Networks (IJCNN)
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2025 International Joint Conference on Neural Networks
Y2 - 30 June 2025 through 5 July 2025
ER -