QCovid® case study: Lessons in commanding public confidence in models

Methods expert Jo Mulligan gives an insight into the lessons learned about commanding public confidence in models from a review of the QCovid® risk calculator

I re-joined OSR around 15 months ago in a newly created regulator role as a methods ‘expert’ (I struggle with the use of the word ‘expert’ – how could anyone be an expert in all statistical methods? Answers on a postcard, please). Anyway, with my methods hat on, several colleagues and I have been testing the lessons that came out of our review of the statistical models designed to award grades in 2020. That review looked at the approach taken to developing statistical models to award grades in the absence of exams, which were cancelled because of the pandemic. Through this work, OSR established key factors that impacted on public confidence and identified lessons that would be useful for those developing models and algorithms in the future.

Applying our lessons learnt to QCovid®

We wanted to see if the lessons learnt from our review of the grading models in 2020 could be applied in a different context, to a different sort of algorithm, and to test whether the framework stood up to scrutiny. We chose another model for this testing: the QCovid® risk calculator, which was also developed in response to the pandemic.

In 2020, the Chief Medical Officer of England commissioned the development of a predictive risk model for COVID-19. A collaborative approach was taken, involving members from the Department of Health and Social Care (DHSC), NHS Digital, NHS England, the Office for National Statistics, Public Health England and the University of Oxford, as well as researchers from other UK universities, NERVTAG, Oxford University Innovations, and the Winton Centre for Risk and Evidence Communication. It was a UK-wide approach, agreed with and including academics from Wales, Scotland and Northern Ireland.

The original QCovid® model that we reviewed calculates an individual’s combined risk of catching COVID-19 and dying from it, allowing for the inclusion of various risk factors. It expresses this as both an absolute risk and a relative risk. The model also calculates the risk of catching COVID-19 and being hospitalised, but these results were not used in the Population Risk Assessment.

What is an absolute risk? This is the risk to an individual based on what happened to other people with the same risk factors who caught COVID-19 and died as a result.

What is a relative risk? This is the risk of COVID-19 to an individual compared to someone of the same age and sex but without the other risk factors.
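
To make the distinction concrete, here is a minimal worked example with invented numbers (these are illustrative only, not real QCovid® outputs). The arithmetic is simple: relative risk is an individual’s absolute risk divided by the absolute risk of a matched baseline person of the same age and sex without the other risk factors.

```python
# Hypothetical worked example: how absolute and relative risk relate.
# All numbers are invented for illustration; they are not QCovid® outputs.

# Risk of death for someone of the same age and sex with no other risk factors
baseline_absolute_risk = 0.0004    # 0.04%

# Risk for an individual with additional risk factors (e.g. BMI, conditions)
individual_absolute_risk = 0.0020  # 0.20%

# Relative risk compares the individual with that matched baseline person
relative_risk = individual_absolute_risk / baseline_absolute_risk

print(f"Absolute risk: {individual_absolute_risk:.2%}")     # 0.20%
print(f"Relative risk: {relative_risk:.0f}x the baseline")  # 5x
```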

The academic team, led by the University of Oxford, developed the model using the health records of over eight million people. It identified certain factors, such as age, sex, BMI, ethnicity and existing medical conditions, that affected the risk of being hospitalised or dying from COVID-19. The team then tested the model to check its performance using the anonymised patient data of over two million other people. Having identified these risk factors, NHS Digital applied the model to the medical records of NHS patients in England, and those identified as being at an increased risk of dying from COVID-19 were added to the Shielded Patient List (SPL).
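
As an aside, the develop-then-validate pattern described above can be sketched in a few lines. This is emphatically not the QCovid® methodology or data: the model class, features and synthetic cohorts below are hypothetical stand-ins, shown only to illustrate the idea of fitting a risk model on one cohort and checking its performance on people it has never seen.

```python
# Minimal sketch of developing a risk model on one cohort and validating it
# on another. Everything here is a synthetic stand-in for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
weights = np.array([1.2, 0.6, 0.9])  # invented effect sizes

def synthetic_cohort(n):
    """Generate invented records with three risk-factor columns
    (think: age, BMI, an existing-condition flag) and an outcome."""
    X = rng.normal(size=(n, 3))
    y = (X @ weights + rng.normal(size=n)) > 2.5
    return X, y

# "Development cohort" (stand-in for the eight million health records)
X_dev, y_dev = synthetic_cohort(8_000)
model = LogisticRegression().fit(X_dev, y_dev)

# "Validation cohort" (stand-in for the two million other patients):
# performance is checked on records the model has never seen
X_val, y_val = synthetic_cohort(2_000)
risk_scores = model.predict_proba(X_val)[:, 1]
print(f"Validation AUC: {roc_auc_score(y_val, risk_scores):.3f}")
```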

This approach was ground-breaking as there was no precedent for applying a model to patient records to identify individuals at risk on such a scale. Before the development of QCovid®, the SPL had been based on a nationally defined set of clinical conditions and local clinician additions. People added to the SPL through application of the QCovid® model were prioritised for vaccination and sent a detailed letter by DHSC advising them that they may want to shield.

The QCovid® model was peer reviewed and externally validated by trusted statistical bodies such as the ONS, and both the results and the model code were published.

What we found from reviewing QCovid®

In testing the lessons from the review of the grading models in 2020, we found that some were not as relevant for QCovid®. For example, the lesson about the need to be clear and transparent on how individuals could appeal decisions that the algorithm might have made automatically was less relevant in this review. This is because, although individuals were added to the SPL through the model, shielding was advisory only, and individuals (or GPs on their behalf) could remove themselves from the list. Finding lessons that are less relevant in a different context is to be expected, as every algorithm or model will differ in its development, application and outcomes.

As part of this review, we did identify one additional lesson. This concerned how often the underlying data should be refreshed for the algorithm to remain valid and appropriate in the context of its use, especially if the algorithm is used at different points in time. This was not relevant for the review of the grading models, as they were only intended to be used once. However, in a different situation, such as the pandemic, where new information is being discovered all the time, this was an important lesson.

What do we plan next?

We found that the framework developed for the review of grading models proved to be a useful tool in helping to judge whether the QCovid® model was likely to command public confidence. It provided assurance about the use of the model and stood up well under scrutiny. Additionally, working on this review has helped us to understand more about QCovid® itself and the work behind it. QCovid® provides a great example of how models and algorithms can command public confidence when the principles of Trustworthiness, Quality and Value (TQV) are considered and applied. In terms of how we will use these findings going forward, we have updated our algorithm review framework, and this example will feed into the wider OSR work on Guidance for Models as it continues to be developed this year.

I really hope this work will be useful when we come across other algorithms that have been used to produce statistics, and that when we incorporate it into our Guidance for Models, others will benefit more directly too. So, this concludes my first blog in my Methods role at OSR and, in fact, my first blog ever!