We published our report on 2 March 2021. This page was updated in November 2022 to fix a broken link.
The Office for Statistics Regulation (OSR) is undertaking a review focused on the process of developing the statistical models designed for awarding 2020 exam results.
Our review aims to identify lessons for public bodies considering the use of statistical models to support decisions. These lessons will be based on our consideration of the extent to which the qualifications regulators across the UK developed their models in line with the principles set out in the Code of Practice for Statistics.
It is not within OSR’s role to regulate operational decisions made by government bodies about outcomes for individuals, nor to form a judgement on decisions about government policy. As such, we are not reviewing the implications of the models for the results of individual students, nor evaluating policy decisions on the most appropriate way to award exam grades in the absence of exams.
What have we been doing?
2 March 2021 – We have published our report.
9 February 2021 – We have received comments from the qualification regulators and are implementing changes to ensure the factual accuracy of our report. Alongside this, we are refining our recommendations so that they drive improvements in the development of statistical models by public bodies in the future and support public confidence in decisions made with them. We plan to publish our report on 2 March 2021.
25 November 2020 – Following discussions with the Authority’s Regulation Committee, we have decided to publish this review in the New Year. This is to allow time for us to be clear how our review will drive improvements in the development of statistical models in the future and to allow the qualifications regulators sufficient time to provide comments to us on the report.
11 November 2020 – We are holding seminars with organisations that work in the modelling and/or artificial intelligence space. These seminars aim to help us establish how the learning we have identified can be applied by those developing statistical models to support decision making in future. Alongside the seminars, we are writing the report, ready to publish on 30 November.
26 October 2020 – Emily Carless, lead regulator on the review, comments: “We are making good progress on the review and are on track to publish our findings by the end of November.
We are exploring two main questions – ‘To what extent did the approach to developing and communicating the models support public confidence in the outputs?’ and ‘To what extent were the algorithms developed appropriate for their intended purpose?’
We are not looking backwards or seeking to apportion blame with this review. It is important that this review focuses on the lessons learned and the way forward when producing future statistical models.”
13 October 2020 – Director General Ed Humpherson has written a blog, talking about why the exams algorithm story is about more than exams, and how our review will identify lessons for public bodies considering the use of statistical models in the future.
16 September 2020 – To support us with the review we have set up an Expert Oversight Group, which is meeting on a regular basis. The purpose of the group is to provide technical oversight to the OSR project team. Members of this group are the National Statistician, Professor Sir Ian Diamond; David Hand, Professor at Imperial College London; Sir David Norgrove, Chair of the UK Statistics Authority; and Professor Sir David Spiegelhalter, leading statistician and Chair of the Winton Centre at Cambridge University.
28 August 2020 – We have further developed our scope for the review, which will focus on considering the extent to which qualifications regulators across the UK developed their models in line with the Code of Practice.
18 August 2020 – We have written to the Royal Statistical Society confirming our plan to conduct a review of the approach taken to developing statistical models designed for awarding 2020 exam results.
How are we developing the lessons for public bodies?
Guided by the pillars of the Code – Trustworthiness, Quality and Value – we will initially be exploring the following two questions.

To what extent did the approach to developing and communicating the models support public confidence in the outputs? This will consider the clarity of purpose, governance, transparency, communication, ethics and public acceptability of the algorithms, as well as the role of the appeals process.

To what extent were the algorithms developed appropriate for their intended purpose? This will include model selection and limitations, the quality of the data sources, quality assurance of outputs, innovation and collaboration during model development, and the validity of claims made about the algorithms.
These webpages will be updated as the review progresses.
If you wish to contact us regarding this review please email firstname.lastname@example.org