In our latest guest blog, Joy Aston and Mark Durkee, from the Centre for Data Ethics & Innovation (CDEI, part of the newly-formed Department for Science, Innovation & Technology), discuss the Algorithmic Transparency Recording Standard (ATRS), which they have developed with the Central Digital and Data Office (CDDO) following commitments in the National Data Strategy, and reinforced by findings from the OSR’s report on public confidence in statistical models published in 2021.

Ensuring that statistics are used responsibly has long been a key part of the remit of statisticians working within government. The approach to doing this has matured over the years, including via the introduction of the Office for Statistics Regulation (OSR), and the government’s statistics profession has a strong track record in delivering trustworthy official statistics.

However, in recent years, statistical models have found new uses, often described as algorithms, AI or algorithmic tools. To help non-experts understand this work, we use ‘algorithmic tool’ as a deliberately broad term covering different applications of AI and complex algorithms.

Used well, such algorithmic tools can offer significant benefits: saving time, supporting more effective decisions and risk assessments, and more. The increased use of complex statistical models in more operational ways has important implications for the role of statisticians. A badly designed, or badly understood, statistical model might have implications not only for broad policy-making, but also for specific decisions made about individuals. A growing field of data ethics has emerged to help manage some of these risks, and to ensure that such models are developed and used responsibly.

The public has a right to an explanation of, and information about, how the government operates and makes decisions about them. This ensures people can understand the actions taken, appeal decisions, and hold responsible decision-makers to account.

Perhaps the most high-profile example of the need to secure public confidence in algorithms was the approach taken by the UK exam regulators in awarding exam grades in 2020. The subsequent OSR report on public confidence in statistical models, published in 2021, highlighted transparency as key to ensuring public confidence.

Though public confidence is a core reason for transparency, there are other benefits too:

  • Transparency can support innovation in organisations, helping senior leaders to engage with how their teams are using AI and enabling the sharing of best practice between organisations.
  • Transparency can help to improve engagement with the public, and facilitate greater accountability: enabling citizens to understand or, if necessary, challenge a decision.
  • Transparency tends to make things better: when we are open about our work, we tend to do it better, for example by paying more attention to broader data ethics issues, or by clearly articulating the limitations of a tool.

The OSR report recommended the creation of a comprehensive directory of guidance for government bodies that are deploying algorithmic tools. Many other organisations have made similar calls, and commitments were made in both the National Data Strategy (2020) and the National AI Strategy (2021). In response, the Centre for Data Ethics & Innovation (CDEI) and the Central Digital and Data Office (CDDO) have worked together to develop the Algorithmic Transparency Recording Standard (ATRS).

The Standard establishes a standardised way for public sector organisations to proactively and transparently publish information about how and why they are using algorithmic approaches in decision-making. This information is published on gov.uk to make it easily accessible to the public. It has been endorsed by the UK government Data Standards Authority, which recommends the standards, guidance and other resources government departments should follow when working on data projects.

The design and development of the Standard has been underpinned by extensive collaboration with public sector, industry and academic stakeholders as well as citizen engagement. It is informed by a public engagement study run by the CDEI and BritainThinks and has been piloted with a variety of public sector organisations across the UK, including the Information Commissioner’s Office, the Food Standards Agency, police forces and more. Pilot partners have noted several benefits internally, including how it encourages different parts of an organisation to work together and consider ethical aspects from a range of perspectives.

The use of the term ‘algorithmic tool’ is partly driven by this public engagement (as a term more readily understood by the general public than alternatives such as ‘statistical model’), but it also seeks to emphasise the Standard’s focus on how tools are embedded in operational processes. Understanding how the underlying statistical model(s) were created and tested is an important part of this, but so is a clear understanding of how a tool is being used in practice, for example by front-line operational staff.

We have developed, published, and piloted the Standard, but there is still more work we want to do to support and embed its use. We are currently working on developing an online repository to host records in a searchable and accessible way, along with continuing to encourage its uptake. In parallel, we are actively working directly with teams to complete records, offering support, feedback and guidance.

We would encourage anyone working with, or developing, tools that they feel are in scope of the Standard to get in touch with us at algorithmic.transparency@cdei.gov.uk.