Abstract
AI-enabled capabilities in war pose new ethical challenges, even for non-lethal support tools such as the battlefield casualty triage drones that are the focus of this paper. We address an important and underexplored problem: how to embed ethical considerations into military AI systems designed to save lives rather than take them. The paper examines the ‘ATRACT’ project, which is developing an AI-powered drone as a trustworthy robotic autonomous system (RAS) to help frontline medics prioritise casualties in the critical post-trauma minutes that shape survival chances. As a position paper written while development is still underway, it presents the bespoke ethics framework created in the course of the project to date and offers real-time insights for other defence and security projects seeking to operationalise abstract AI ethics principles into concrete design and assurance guidance. We examine and draw upon approaches to operationalising abstract principles in adjacent domains to show how high-level principles can be translated into implementable requirements for technical robustness, ethical compliance, safety, and legal conformity, actively shaping system architecture, data, and human–machine interaction. We argue that trustworthiness is a socio-technical property that emerges from governance, documentation, and oversight rather than from code alone, and that ethical assurance for triage drones must be designed in from inception and verified through ongoing testing, audit, and transparent evidence of due diligence.
| Original language | English |
|---|---|
| Journal | AI and Ethics |
| Publication status | Accepted for publication - 17 Dec 2025 |
Keywords
- ethical challenges
- robotic autonomous system
- post-trauma care
- ethical principles
- AI-powered autonomous system
- socio-technical property
Funding
- UKRI / EPSRC grant EP/X028631/1