
Responsible governance of emerging AI technologies requires us to anticipate their broader societal implications, imagine alternative futures and assess their desirability. This is a difficult task in current policymaking contexts, which are informed by evidence-based logic. This fellowship explores the challenges and barriers that UK central and local government teams face in effectively anticipating, imagining and evaluating the broader societal implications of emerging AI technologies. Through a co-design approach engaging policymakers, public officers, foresight practitioners and researchers, the project aims to develop conceptual and practical resources that foster policymakers' 'moral imagination' around future AI technologies.

The challenge

Currently, institutional foresight functions suffer from what could be called a 'moral deliberation gap'. This stems from an over-reliance on quantitative approaches, which are unable to capture 'fuzzy' qualitative societal impacts. Even when more qualitative foresight methods are used, they rarely prompt policymakers to examine the moral assumptions underlying their decisions, resulting in a disconnect between foresight outputs and deliberative processes.

This moral deliberation gap hinders responsible AI ecosystems and limits the diversity of voices and perspectives included when exploring emerging social and ethical impacts. This in turn leads to the neglect of non-measurable impacts and a poor articulation of the moral and societal desirability of AI applications. To address this challenge, the project will build on existing tools, toolkits and frameworks for assessing the ethical implications of emerging technologies, developing conceptual and practical resources that the UK governance ecosystem can use to bring imaginative thinking into its responses to emerging AI technologies.

Key project partners

This project is delivered in collaboration with the Ada Lovelace Institute and Nuffield Council on Bioethics (NCOB).

How will we do this?

  • Our approach combines methods and concepts from the social sciences with concepts developed in the philosophy and ethics of technology.
  • We will conduct interviews with relevant stakeholders, particularly those in the AI policy ecosystem and institutionalised foresight contexts, to gain a richer understanding of the complexities and nuances of policymaking contexts.
  • We will analyse existing tools, frameworks and methodologies for assessing the ethical implications of emerging technologies, evaluating their fitness for the UK anticipatory governance ecosystem.
  • We will assess how the concept of 'moral imagination' can be understood, applied and operationalised in the context of innovation governance.
  • We will co-create and evaluate practical resources with two partners in local and central government.