Abstract
Educational institutions collect vast amounts of data related to student admissions, academic performance, and institutional processes. However, much of this data remains underutilized, limiting its potential to inform equitable, data-driven decision-making. In particular, polytechnic admission processes often rely on traditional evaluation criteria that may introduce biases, leading to unfair selection procedures. This raises critical concerns about transparency, accountability, and fairness in AI-driven decision-making systems within educational institutions. To address these challenges, this study develops a novel Structural Causal Model Ontology (SCMO) framework for knowledge discovery and local explainability in student admission processes. The SCMO framework identifies the key causal relationships among the features needed to model the admission process and was validated using conditional independence test (CIT) criteria. The framework integrates causal reasoning with explainable artificial intelligence (XAI) techniques to enhance transparency and mitigate biases in admission predictions. A key innovation of this research is Fair-LIME, a fairness-aware extension of Local Interpretable Model-Agnostic Explanations (LIME) that incorporates causal constraints to ensure unbiased interpretability. The SCMO was further used to identify and constrain input features that contribute to bias in the LIME framework applied to machine learning (ML) black-box predictions. By employing an ablation process guided by the SCMO, the study produced LIME explanations that were more stable and free from fairness bias than automated LIME explanations without ablation. Specifically, ablation-guided explanations improved stability by 1% in F1 score over automated LIME, while improving fairness scores from 0.72 to 0.94.
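As an illustration of the CIT criteria mentioned above, the minimal sketch below tests a single conditional-independence claim implied by a causal graph using a Fisher-z partial-correlation test. The function and the column names in the usage comment are hypothetical assumptions for illustration, not the thesis's actual validation code.

```python
import numpy as np
import pandas as pd
from scipy import stats

def ci_test(df: pd.DataFrame, x: str, y: str, given: list, alpha: float = 0.05):
    """Fisher-z partial-correlation test: is x independent of y given `given`?

    Each missing edge in a structural causal model implies such a
    conditional independence, which the data can fail to reject.
    """
    cols = [x, y] + list(given)
    prec = np.linalg.inv(df[cols].corr().values)        # precision matrix
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])  # partial correlation
    z = np.arctanh(r)                                   # Fisher z-transform
    stat = np.sqrt(len(df) - len(given) - 3) * abs(z)
    p = 2 * (1 - stats.norm.cdf(stat))
    return p > alpha, p  # True -> independence is not rejected at level alpha

# Hypothetical usage on an admission dataset with assumed column names:
# ok, p = ci_test(admissions, "post_utme_score", "admitted", ["utme_score"])
```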
The study employs a deductive research approach, using real-world admission datasets from a Nigerian polytechnic to evaluate the effectiveness of the framework. Machine learning models, including Gaussian Naïve Bayes, Decision Trees, and Logistic Regression, were trained on past admission data, and their predictions were analysed through the SCMO-Fair-LIME framework to assess fairness, explainability, and predictive reliability. The findings indicate that incorporating structural causal modelling into XAI techniques significantly improves fairness-aware decision-making by identifying and mitigating algorithmic biases at both the feature selection and interpretability stages. The Fair-LIME framework also produced a higher F1 score than the original LIME framework (95% vs. 94%).
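To make the pipeline concrete, the sketch below trains the three classifier families on an SCMO-ablated feature set and generates a local LIME explanation. It assumes the `scikit-learn` and `lime` packages, uses synthetic data and hypothetical feature names as stand-ins for the polytechnic dataset, and illustrates the approach rather than reproducing the thesis's actual code.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical feature names; the bias set stands in for SCMO findings.
feature_names = ["utme_score", "o_level_grade", "post_utme_score",
                 "gender", "state_of_origin"]
bias_features = {"gender", "state_of_origin"}        # flagged via the SCMO
keep = [i for i, f in enumerate(feature_names) if f not in bias_features]

# Synthetic stand-in for the real admission dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, len(feature_names)))
y = (X[:, 0] + X[:, 2] + 0.3 * rng.normal(size=600) > 0).astype(int)
X_train, X_test, y_train, y_test = X[:480], X[480:], y[:480], y[480:]

models = {"GaussianNB": GaussianNB(),
          "DecisionTree": DecisionTreeClassifier(max_depth=5),
          "LogisticRegression": LogisticRegression(max_iter=1000)}
for name, model in models.items():
    model.fit(X_train[:, keep], y_train)             # SCMO-guided ablation
    print(name, "F1:", f1_score(y_test, model.predict(X_test[:, keep])))

# LIME now explains predictions over causally sanctioned features only.
explainer = LimeTabularExplainer(
    X_train[:, keep],
    feature_names=[feature_names[i] for i in keep],
    class_names=["rejected", "admitted"],
    discretize_continuous=True)
exp = explainer.explain_instance(
    X_test[0, keep], models["LogisticRegression"].predict_proba, num_features=3)
print(exp.as_list())
```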
Despite these advancements, a limitation of the approach is that the SCMO remains qualitative and context-specific: the Fair-LIME framework has so far been validated only on data from a Nigerian polytechnic and may require adaptation for other educational settings. Future research could apply other explanation frameworks, such as Shapley-value-based methods, to the same dataset to compare performance. Finally, by providing a structured ontological approach to causal fairness and transparency, the SCMO-Fair-LIME framework offers a robust methodology for AI-driven decision support in polytechnic admissions. The research has significant implications for policymakers, educational institutions, and AI ethics practitioners, helping to ensure that AI applications in education uphold principles of fairness, accountability, and trustworthiness.
| Date of Award | 23 Jun 2025 |
| --- | --- |
| Original language | English |
| Awarding Institution | |
| Supervisor | Olumuyiwa Matthew (Supervisor), Peter Bednar (Supervisor) & Alexander Gegov (Supervisor) |