Ethical funding for trustworthy AI: initial workshop outcomes

Marie Oldfield, Allison Gardner, Adam Leon Smith, Adam Steventon, Ellen Coughlan

Research output: Contribution to conference › Paper › peer-review


A number of ethical AI frameworks [1] have been published to guide developers in producing AI systems that mitigate the risks and harms such systems can cause. Systems developed in accordance with these ethical guidelines can be considered "Trustworthy AI". However, despite the prevalence of these guidelines, AI systems that infringe on equality and human rights, and that exhibit significant bias, continue to be deployed [2]. The Ethical Funding for Trustworthy AI (EFTAI) project was formed in response to growing concerns about the development and deployment of Artificial Intelligence (AI) systems that result in bias, discrimination and infringements of human rights [3]. Specifically, we focus on how and why such AI systems have been funded and what controls are in place at the funding stage.
Original language: English
Publication status: Published - 2 Jun 2021
Event: AISB Symposium 2021 - London, United Kingdom
Duration: 6 Sept 2021 – 10 Sept 2021


Conference: AISB Symposium 2021
Country/Territory: United Kingdom


