The pandemic has triggered its own oft-repeated tropes: that ‘no-one has had enough of experts now’, or variants thereon (see this piece in the Irish Times); or that while it’s good to be guided by the science, the “science” is not a single monolithic block of knowledge (here’s an excellent piece by Dominic Abrams).
Here are OSR’s modest suggestions for additional tropes that ought to be repeated widely, based on our perspective as regulators of the statistics system.
We are all armchair epidemiologists now
People are looking keenly at the daily graphs and bar charts on the progress of the pandemic. We are looking for patterns, for change, for a flattening of the curve. We are building hypotheses from the data like veteran disease modellers. We have become armchair epidemiologists.
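For readers who want to go beyond eyeballing the charts, here is a minimal sketch of the kind of smoothing an armchair epidemiologist might apply. It uses Python’s pandas library with invented daily counts (none of the numbers are real data): a 7-day rolling average whose day-on-day changes shrink towards zero as the curve flattens.

```python
# A minimal sketch, using invented daily counts, of looking for a
# flattening curve the way an armchair epidemiologist might.
import pandas as pd

daily_cases = pd.Series(
    [12, 18, 25, 41, 60, 85, 110, 140, 165, 180, 190, 195, 197, 198],
    index=pd.date_range("2020-04-01", periods=14, freq="D"),
)

# A 7-day rolling average irons out day-of-week reporting artefacts.
smoothed = daily_cases.rolling(window=7).mean()

# When the day-on-day change in the smoothed series shrinks towards
# zero, the curve is flattening.
print(smoothed.diff().round(1))
```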
We are seeing the power of statistics to inform, to paint a picture
At OSR we’ve been saying for a long time that statistics are not just for the big inside-the-tent decision-makers. They are also there to inform everyone about the world we live in. As an example of our advocacy of this universal role of statistics, here’s a blog we published last autumn[1]. The pandemic illustrates the point better than any presentation from me or my colleagues could: this has been a pandemic revealed and understood through data and statistics.
Trustworthiness really matters
Trustworthiness is about the signals that the speaker gives to the hearer: about how the speaker shows that they have the audience’s interests at heart, not their own private interests. The role scientists are playing in communicating about the pandemic illustrates this perfectly: the scientists are standing up for an informed, non-partisan judgement about what the available evidence says. They may not always get it right, but they are clear and honest about their judgements and the areas they are not sure about. This illustrates what I recently said for an article in the New York Times – “Being trustworthy depends not on conveying an aura of infallibility, but on honesty and transparency.”
It’s crucial to be clear what stats do and don’t say
Most of OSR’s interventions during the pandemic have focused on providing clarity on what the statistics do and do not say. For example, over time, DHSC and PHE became better at explaining that the daily deaths figure was incomplete, in that it only included figures from certain settings, mainly hospitals. The ONS’s weekly figures provide a more complete picture, albeit with a time lag. The key point is that this distinction wasn’t made clearly at the start of the pandemic and there was a risk of providing a confusing picture to the public. As we said in our review, “The nature and extent of the uncertainty around the UK estimates of deaths associated with COVID-19 has not so far been made clear. However, we are aware of efforts being made to improve the clarity and transparency of the material”.
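To make the coverage distinction concrete, here is an illustrative sketch with invented figures (not real DHSC, PHE or ONS data) of how a hospital-only daily count can understate a complete all-settings total:

```python
# Illustrative sketch with invented figures (not real DHSC/PHE/ONS
# data): a hospital-only daily count understates the complete weekly
# total, which also covers care homes and the community.
daily_hospital_deaths = [95, 102, 110, 98, 90, 85, 80]  # one week
hospital_total = sum(daily_hospital_deaths)

weekly_all_settings = 840  # hypothetical all-settings registrations

undercount = weekly_all_settings - hospital_total
print(f"Hospital-only total: {hospital_total}")
print(f"All-settings total:  {weekly_all_settings}")
print(f"Not captured:        {undercount} "
      f"({undercount / weekly_all_settings:.0%} of the full picture)")
```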
Don’t forget data quality. Nor Dark Data.
The first question to ask whenever looking at a piece of data is: “how can I be sure it really fully captures what it claims to cover?”. It’s a simple question, but it raises lots of complex issues around quality – about how data are recorded and collected, and about the inherent uncertainties in the data. But it’s amazing how often asking this simple question can reveal particular weaknesses in data – just read this excellent report from the Welsh Government, describing how they identified weaknesses in the quality of their daily data on COVID-19, and how they’ve addressed them.
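As a flavour of what answering that question involves in practice, here is a minimal sketch of two basic completeness checks, run against a hypothetical daily data file (the file name and column names are invented for illustration):

```python
# A minimal sketch of basic completeness checks; the file and column
# names are hypothetical.
import pandas as pd

df = pd.read_csv("daily_cases.csv", parse_dates=["report_date"])

# Check 1: are any days missing entirely from the series?
expected = pd.date_range(df["report_date"].min(),
                         df["report_date"].max(), freq="D")
missing_days = expected.difference(df["report_date"])

# Check 2: are any records duplicated, silently inflating the counts?
duplicates = df.duplicated(subset=["report_date", "area_code"]).sum()

print(f"Missing days: {list(missing_days.date)}")
print(f"Duplicate records: {duplicates}")
```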
Asking this question can also expose a broader issue of weaknesses and gaps. The issue of data quality and data gaps has been neatly summarised by David Hand in his recent book, Dark Data – see his presentation on YouTube. The subtitle of Dark Data says it all, and is so relevant to this pandemic: Why What You Don’t Know Matters.
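A tiny simulation can make that subtitle vivid. The sketch below uses invented numbers (it is not taken from the book): when milder cases are less likely to be recorded, the average severity computed from the recorded data alone is biased upwards.

```python
# A tiny illustration, with invented numbers, of why what you don't
# know matters: unseen cases that differ systematically from seen
# ones bias any summary of the visible data.
import random

random.seed(42)

# A true population of 1,000 cases with severity scores from 0 to 100.
population = [random.uniform(0, 100) for _ in range(1000)]

# Dark-data mechanism: the milder a case, the less likely it is ever
# recorded (say, only severe cases reach hospital and enter the data).
recorded = [x for x in population if random.random() < x / 100]

true_mean = sum(population) / len(population)
observed_mean = sum(recorded) / len(recorded)
print(f"True mean severity:     {true_mean:.1f}")
print(f"Recorded mean severity: {observed_mean:.1f}")  # biased upwards
```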
Publish as a default
Let’s end with the area where we’ve made our strongest interventions. Statistics cannot do their job to inform people’s understanding and to be a source of trustworthy insight if they are not publicly available. One of our core principles is that when information is used in public discourse – by Ministers, for example – it should be fully, openly and freely available. By and large, governments in the UK have sought to comply with this (the Scottish Government stand out here for their open approach to publishing management information).
We are proportionate here: we recognise that not every piece of data is appropriate to publish. And Ministers may use a figure briefly in a Parliamentary committee or a media briefing; given the volume of information they are receiving, it isn’t always possible for them to limit themselves to published information.
But we do think that the default should be to publish, regardless of whether the information is tagged as official statistics, management information or research. For the public, the distinction between these categories is unlikely to be meaningful. We consider that the same principles of transparency, quality and equality of access should be followed for all data used in public discourse.
On two occasions, we’ve had to step in to encourage publication, with positive responses from the relevant departments. The first concerned DWP, which undertook to publish data used by Ministers and then did not do so; we wrote publicly to encourage publication. The second concerned the Department of Health in Northern Ireland, which suspended publication of a dashboard. In both cases, our intervention led to commitments to publish more data, in line with the need for equality of access.
Equality of access at the heart of it all
In fact, this principle of equality of access is at the heart of everything.
After all, without published statistics, there can be no armchair epidemiologists.