“Dreams ruined by an algorithm”. This was the headline on the BBC Northern Ireland website on 13 August. It summarised one of the main stories of the summer in all four nations of the UK. The headline reflected the human and emotional impact of the 2020 exam results, with some students receiving lower grades than they expected or felt they deserved.
But the story is about more than using statistical models to award grades. The backlash, focused specifically on the role that algorithms played, threatens to undermine public confidence in statistical models more broadly.
We are concerned about this. That’s why we’ve launched a review of the way the statistical models were built and overseen. Our intention is not to apportion blame or to level criticisms with the benefit of hindsight. Instead, we want to focus on the future: how to implement models in a way that commands public confidence.
Government statisticians have performed well during the pandemic. They have responded quickly to identify, develop and produce new data and statistics – statistics which are now an integral part of our lives. The ONS, and its counterparts in Scotland and Northern Ireland, have provided clear analysis of the human tragedy of mortality. And, as OSR’s rapid reviews have shown, statisticians have produced new data on areas as diverse as the economy, transport, school attendance, and people’s engagement with green space.
Over the last six months, these and other statistics have satisfied the public’s appetite for clear, trustworthy information on the unfolding COVID-19 pandemic. There is clear evidence that they are meeting a real need and serving the public good.
Our overarching role at OSR is to ensure that statistics serve the public good, and we do this by setting a Code of Practice that Government must follow. The Code has public confidence in statistics at its heart. However, confidence in statistics risks being undermined by the poor public reception of the exam algorithms in all four nations of the UK.
From the perspective of statistics serving the public good, the exams story looks worrying. Not only has there been widespread criticism of the statistical models because of the perceived unfairness of the results they produced, but there is also a risk that future deployment of statistical techniques may be held back by the chilling effect of the poor publicity. That would be a real setback: it would limit statistical innovation and prevent the public sector from using new approaches to providing services to the public.
The exams issue has shone a light on how statistical models are used to make decisions. We want to learn from this experience. We want to explain how statistical models can be deployed and overseen. We want to set out some basic expectations for how these models can serve the public good. And we want to show how the principles of the Code of Practice – trustworthiness, quality and value – are highly relevant to situations like this, where complex models are used to support decisions that have impacts on individual human beings.
In short, the review aims to identify lessons for public bodies considering the use of statistical models to support decisions.
Our overall aim is simply stated. The use of statistical models to support decisions is likely to increase in coming years. We want to show how government can learn from this experience – and make sure that these statistical models serve the public good.