Reflections on lessons learned from COVID for health and social care data

You may have noticed that the last 18 months or so have been rather unusual. In fact it’s getting difficult to remember what things were like before masks, distancing and the universal smell of alcohol gel. And there’s another change to which we have become accustomed – the daily parade of statistics, the use of graphs on the news, and the huge presence of scientific and statistical discussion, both in the media and among ordinary people who are not even statisticians!

The scale and ambition of the health data being made available would have been unthinkable just two years ago, as would be the complexity and sophistication of the analyses being conducted. But the Office for Statistics Regulation’s ‘Lessons Learned’ report argues that we should not be complacent: we need to press harder for more trustworthy, better quality, and higher value statistics.

There are a few recommendations that stand out for me. First, Lesson 9 focusses on improved communication. Back in May 2020 I stuck my neck out on the Andrew Marr show and criticised the press briefings as being a form of ‘number theatre’, with lots of big and apparently impressive numbers being thrown around without regard for either accuracy or context. This attracted attention (and 1.7m views on Twitter). But although some dodgy graphs continued to appear, the presentation of statistics improved. Crucial to communication, however, is Lesson 1 on transparency – it is essential that the statistics underlying policy decisions, which affect us all, are available for scrutiny and are not cherry-picked to avoid those that might rock some political boat. This requires both constant vigilance, and appropriate clout for professional analysts.

Lesson 7 deals with collaboration, reflecting the extraordinary progress that has been made both in collaboration across governments and with academic partners, all of whom have shown themselves (against archetype) to be capable of agile and bold innovations. The Covid Infection Survey, in particular, has demonstrated both the need for, and the power of, sophisticated statistical modelling applied to survey data. Although of course I would say that, wouldn’t I, as I happen to be chair of their advisory board, which has enabled me to see first-hand what a proper engagement between the ONS and universities can achieve.

Finally, Lesson 3 addresses the idea that data about policy interventions should not just enable us to know what is happening – essentially ‘process’ measures of activity – but help us to evaluate the impact of that policy. This is challenging; Test and Trace has come in for particular criticism in this regard. For statisticians, it is natural to think that data can help us assess the effect of actions, with the randomised clinical trial as a ‘gold-standard’, but with an increasing range of other techniques available for non-experimental data. Again there is a need to get this up the agenda by empowering professionals.

An over-arching theme is the need for the whole statistical system to be truly independent of political influence from any direction. While this is enshrined in legislation, a continued effort will need to be made to make sure that work with data lives up to the standards expressed in the Code of Practice for Statistics, in terms of trustworthiness, quality and value. The pandemic has shown how much can be achieved with the right will and appropriate resources, and OSR’s ‘Lessons Learned’ point the way forward.


David Spiegelhalter is a Non-Executive Director of the UK Statistics Authority, which oversees the work of the Office for Statistics Regulation.

Why I like the Code

Sir David Spiegelhalter, former President of the Royal Statistical Society, blogs on the anniversary of the refreshed Code of Practice for Statistics.

I must be honest: a Code of Practice for Statistics is not something I would usually get very excited about.  So why do I trumpet the virtues of the new Code in the talks I give?

There are two main reasons. First, the rumours of a ‘post-truth’ society mean that it is timely to talk about trust in numbers, science and experts, and whenever I hear the word ‘trust’ I turn to Baroness Onora O’Neill, author of the 2002 Reith Lectures (a brilliant listen or read) and presenter of one of the best TEDx talks I have seen – summarising everything that’s important about trust, quoting Kant, telling jokes, all in nine minutes.

O’Neill makes the fundamental point that when organisations say they want to be trusted, they are missing the whole point. Trust is something that is offered to us; we have to earn it, and we earn it by demonstrating trustworthiness. It is such a simple idea, and yet when I introduce it in talks, people pull out their phones and start photographing the slide. And so of course I think it is completely appropriate that trustworthiness forms the first pillar in the Code.

My second reason for cheerleading the Code is its emphasis on communication and transparency.  And again I return to Onora O’Neill, who has closely examined the idea of transparency within the context of open data. Under the term ‘intelligent transparency’, she identifies four important features – information should be

  • accessible – people should be able to get at it
  • comprehensible – people should be able to understand it
  • usable – it should suit their needs, and
  • assessable – interested parties should, if necessary, be able to examine the workings and assess its quality.

When I put this list up, audiences reach for their phones again (nobody seems to take pictures when I put my own words of wisdom up, but then again I have not spent a lifetime as a philosopher). I use this list repeatedly when considering our own advice for communicating statistics, and so I advertise the importance the Code places on showing the pedigree of statistical claims.

I believe a vital aspect of transparency about statistics is the open acknowledgement of uncertainty, and anyone seeing me recently will have heard my harangue about the importance of clear communication of margins of error around unemployment or migration figures, as well as acknowledging deeper, unquantified uncertainties due to, say, lack of reliability of data sources.  So I am delighted the Code emphasises the importance of communicating uncertainty, supported by the excellent, and recently updated, GSS document on Communicating Quality, Uncertainty and Change.
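To make concrete what a margin of error is, here is a minimal sketch – with invented numbers, not taken from any official release – of the standard arithmetic behind an approximate 95% interval around a survey-based estimate (estimate ± 1.96 × standard error):

```python
# Hypothetical illustration: a 95% margin of error for a survey-based
# estimate, of the kind that might accompany an unemployment figure.
# The estimate and standard error below are invented for the example.

def margin_of_error(standard_error: float, z: float = 1.96) -> float:
    """Half-width of an approximate 95% confidence interval."""
    return z * standard_error

estimate = 4.3          # e.g. an unemployment rate, in per cent
standard_error = 0.2    # sampling standard error of the estimate

moe = margin_of_error(standard_error)
print(f"{estimate:.1f}% ± {moe:.1f} percentage points "
      f"(95% interval: {estimate - moe:.2f}% to {estimate + moe:.2f}%)")
# prints: 4.3% ± 0.4 percentage points (95% interval: 3.91% to 4.69%)
```

Reporting the interval alongside the headline figure, rather than the bare number alone, is exactly the kind of routine transparency about quantifiable uncertainty that the Code encourages – while remembering that deeper, unquantified uncertainties lie outside any such interval.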

You may possibly have noticed that statisticians can have a bit of an image problem, although I find their tendency toward pedantry rather endearing. I therefore think it is a fine achievement to produce a Code of Practice that is not only full of good sense, but that someone might actually want to read.