The pandemic has triggered its own oft-repeated tropes. Like 'no-one has had enough of experts now' or variants thereon (see this piece in the Irish Times). Or that while it's good to be guided by the science, the 'science' is not just a single monolithic block of knowledge (here's an excellent piece by Dominic Abrams).
Here are OSR's modest suggestions for additional tropes that ought to be repeated widely, based on our perspective as regulators of the statistics system.
We are all armchair epidemiologists now
People are looking keenly at the daily graphs and bar charts on the progress of the pandemic. We are looking for patterns, for change, for a flattening of the curve. We are building hypotheses from the data like veteran disease modellers. We have become armchair epidemiologists.
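It is easy to sketch the kind of check an armchair epidemiologist might run. Below is a minimal Python example, a sketch rather than anything authoritative: the file name daily_cases.csv and the 'date' and 'cases' column names are hypothetical stand-ins for whatever published series you are actually looking at.

```python
# A minimal "armchair epidemiology" sketch: smooth a noisy daily case
# series with a 7-day rolling average to look for a flattening curve.
# The file and column names are hypothetical; adapt them to your source.
import pandas as pd

df = pd.read_csv("daily_cases.csv", parse_dates=["date"])
df = df.sort_values("date").set_index("date")

# Raw daily counts are noisy (weekend reporting dips, batch corrections),
# so a centred 7-day mean gives a clearer view of the underlying trend.
df["cases_7day_avg"] = df["cases"].rolling(window=7, center=True).mean()

# A crude flattening signal: is the smoothed series still rising over
# the most recent fortnight of smoothed values?
recent = df["cases_7day_avg"].dropna().tail(14)
change = recent.iloc[-1] - recent.iloc[0]
print("still rising" if change > 0 else "flattening or falling",
      f"(change over the period: {change:+.0f})")
```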
We are seeing the power of statistics to inform, to paint a picture
At OSR we've been saying for a long time that statistics are not just for the big inside-the-tent decision-makers. They are also there to inform everyone about the world we live in. As an example of our advocacy of the universal role of statistics, here's a blog we published last autumn[1]. The pandemic illustrates this better than any presentation from me or my colleagues: this has been a pandemic that has been revealed and understood through data and statistics.
Trustworthiness really matters
Trustworthiness is about the signals that the speaker gives to the hearer: about how the speaker shows that they have the audience's interests at heart, not their own private interests. The role scientists are playing in communicating about the pandemic illustrates this perfectly: the scientists are standing up for an informed, non-partisan judgement about what the available evidence says. They may not always get it right, but they are clear and honest about their judgements and the areas they are not sure about. This illustrates what I said recently for an article in the New York Times: 'Being trustworthy depends not on conveying an aura of infallibility, but on honesty and transparency.'
It's crucial to be clear what stats do and don't say
Most of OSR's interventions during the pandemic have focused on providing clarity on what the statistics do and do not say. For example, over time, DHSC and PHE became better at explaining that the daily deaths figure was incomplete, in that it only included figures from certain settings, mainly hospitals. The ONS's weekly figures provide a more complete picture, albeit with a time lag. The key point is that this distinction wasn't made clearly at the start of the pandemic, and there was a risk of providing a confusing picture to the public. As we said in our review, 'The nature and extent of the uncertainty around the UK estimates of deaths associated with COVID-19 has not so far been made clear. However, we are aware of efforts being made to improve the clarity and transparency of the material.'
Don't forget data quality. Nor Dark Data.
The first question to ask whenever looking at a piece of data is: 'how can I be sure it really fully captures what it claims to cover?'. It's a simple question, but it raises lots of complex issues around quality: about how data are recorded and collected, and about the inherent uncertainties in the data. But it's amazing how often asking this simple question can reveal particular weaknesses in data. Just read this excellent report from the Welsh Government, describing how they identified weaknesses in the quality of their daily COVID-19 data and how they've addressed them.
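To make that question concrete, here is a minimal sketch of the sort of completeness checks anyone can run before trusting a daily series. It assumes a hypothetical daily_deaths.csv file with 'date' and 'deaths' columns; the names are illustrative, not drawn from any actual publication.

```python
# Basic "does this data capture what it claims to cover?" checks.
# The file and column names below are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("daily_deaths.csv", parse_dates=["date"])

# Are any calendar days missing from the claimed reporting period?
full_range = pd.date_range(df["date"].min(), df["date"].max(), freq="D")
missing_days = full_range.difference(df["date"])
print(f"Missing days: {len(missing_days)}")

# Could duplicates or null entries distort the totals?
print(f"Duplicate dates: {df['date'].duplicated().sum()}")
print(f"Null values: {df['deaths'].isna().sum()}")

# Note: none of this reveals what a "day" means (date of report or
# date of death), or which settings are covered. That is metadata,
# which the numbers alone cannot tell you.
```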
Asking this question can also expose a broader set of weaknesses and gaps. The issue of data quality and data gaps has been neatly summarised by David Hand in his recent book, Dark Data (see his presentation on YouTube). The subtitle of Dark Data says it all, and is so relevant to this pandemic: Why What You Don't Know Matters.
Publish as a default
Let's end with the area where we've made our strongest interventions. Statistics cannot do their job to inform people's understanding and to be a source of trustworthy insight if they are not publicly available. One of our core principles is that when information is used in public discourse (by Ministers, for example) it should be fully, openly and freely available. By and large, Governments in the UK have sought to comply with this (the Scottish Government stand out here for their open approach to publishing management information).
We are proportionate here: we recognise that not every piece of data is appropriate to publish. And Ministers may quote a figure briefly in a Parliamentary committee or a media briefing; given the volume of information they receive, it isn't always possible for them to limit themselves to published information.
But we do think that the default should be to publish, regardless of whether the information is tagged as official statistics, management information or research. For the public, the distinction between these categories is unlikely to be meaningful. We consider that the same principles of transparency, quality and equality of access should be followed for all data used in public discourse.
On two occasions, we've had to step in to encourage publication, with positive responses from the relevant departments. The first concerned DWP, which undertook to publish data used by Ministers and then did not do so. We wrote publicly to encourage publication. The second concerned the Department of Health in Northern Ireland, which suspended publication of a dashboard. In both cases, our intervention led to commitments to publish more data, in line with the need for equality of access.
Equality of access at the heart of it all
In fact, this principle of equality of access is at the heart of everything.
After all, without published statistics, there can be no armchair epidemiologists.