Transparency: bringing the inside out

In our latest blog, Director General for Regulation Ed Humpherson discusses the divergence between internal positivity and external scepticism about analysis in Government, and how transparency is key to serving the public good…

Seen from within Government, these are positive times for analysis. There is an Analysis Function, headed by Sir Ian Diamond, which continues to support great, high-profile analytical work. There are strong professions, including economists, statisticians, operational researchers and social researchers, each with robust methods and clear professional standards. There is an Evaluation Task Force, which is doing great things to raise the profile of policy evaluation. And data and analysis are emphasised by Ministers and civil service leaders like never before – exemplified by the 2023 One Big Thing training event, which focused on the use of data and analysis in Government.

Yet the perspective from outside Government is quite different. The Public Administration and Constitutional Affairs Select Committee has been undertaking an inquiry into Transforming the UK’s Statistical Evidence Base. Several witnesses from outside Government who’ve given evidence, and some of the written evidence that has been provided, highlight concerns about the availability of analysis and how it’s used. In particular, witnesses questioned whether it’s clear what evidence sources inform policy decisions.

What explains this divergence between internal positivity and external scepticism?

In my view, and as I said in my own evidence before the Committee, it all comes down to transparency. By this I mean: the way in which analysis, undertaken by officials to inform Ministers, is made available to external users.

This is highly relevant to the Committee’s inquiry. A key question within the inquiry is the way in which external users can access analysis undertaken within Government.

These questions are very relevant to us in OSR. We have developed the principle of Intelligent Transparency. You can read more here, but in essence, Intelligent Transparency is about ensuring that, when Government makes statements using numbers to explain a policy and its implementation, it makes the underlying analysis available for all to see.

As I explained to the Committee, we make interventions when we see this principle not being upheld – for example, here and here. When we step in, departments always respond positively, and the analysts work with policy and communications colleagues to make the evidence available.

My basic proposition to the Committee was that the more Government can comply with this principle, the more the gap between the internal insight (there’s lots of good analysis) and the external perception (the analysis isn’t used or made available) will close. This commitment to transparency should be accompanied by openness – a willingness to answer questions raised by users, and a willingness to acknowledge the inherent limitations and uncertainties within a dataset.

In terms of what we do at OSR, I wouldn’t see any point, or value, in us going upstream to consider the quality of all the analysis that circulates within Government.

Our role is about public accessibility and public confidence – not about an internal quality assurance mechanism for economics, operational research, social research and other types of analysis undertaken in Government. We are not auditors of specific numbers (i.e. a particular figure from within a statistical series) – something we have to reiterate from time to time when a specific number becomes the focus of political debate. We have neither the resources nor the remit to do that. But we do have both the capacity and the framework to support the appropriate, transparent release and communication of quantitative information.

This is the heartland of our work on statistics, and it’s completely applicable to, say, economic analysis of policy impacts, or evaluations of the impact of Government policy. There are good arrangements for the quality of economic analyses through the Government Economic Service (GES), and the quality of evaluations through the Evaluation Task Force (ETF); and similarly for the other disciplines that make up the Analysis Function. The ETF is a new kid on this particular block, and it is a great innovation, a new force for driving up the standards and openness of Government evaluations.

Where we add value is not in duplicating the GES, or ETF, or similar professional support structures within Government. Indeed, we already work in partnership with these sources of support and professional standards. Our expertise is in how this quantitative information is communicated in a way that can command public confidence.

In short, then, it really does come down to a question of transparency. As I said to the Committee, it’s like a garden in the early morning. Some of it is in the sunlight already, and some of it is still in shade. Gradually, we are seeing more and more of the lawn come into the sunlight – as the reach of transparency grows, to the benefit of the public.

The success and potential evolution of the 5 Safes model of data access

In our latest blog, Ed Humpherson, Director General for Regulation, discusses the 5 Safes model as a key feature supporting data sharing and linkage…


In OSR’s data linkage report, we highlighted the key features of the data landscape that support data sharing and linkage. The 5 Safes model is one of those. Yet we also recommended that the 5 Safes model be reviewed. In this blog, I want to focus on one aspect of the model and set out the case for a subtle but important change.

The 5 Safes model is an approach to data use that has been adopted widely across the UK research community, and has also been used internationally. It is well-known and well-supported and has had a significant impact on data governance. It is, in short, a huge success story. (And for a short history, and really interesting analysis, see this journal article by Felix Ritchie and Elizabeth Green).

The 5 Safes are:

  • Safe data: data is treated to protect any confidentiality concerns.
  • Safe projects: research projects are approved by data owners for the public good.
  • Safe people: researchers are trained and authorised to use data safely.
  • Safe settings: a SecureLab environment prevents unauthorised use.
  • Safe outputs: screened and approved outputs that are non-disclosive.

Any project that aims to use public sector administrative data for research purposes should be considered against the 5 Safes. The model therefore provides a criteria-based framework for giving assurance about the appropriateness of a particular project.
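To make the idea of a criteria-based framework concrete, here is a minimal, purely illustrative sketch in Python of how a proposal might be recorded against the five criteria. The class and field names are hypothetical shorthand rather than part of any official 5 Safes materials, and, as the discussion later in this blog makes clear, in practice the controls are balanced in combination and applied with judgement rather than treated as a rigid pass/fail checklist.

    from dataclasses import dataclass

    @dataclass
    class FiveSafesAssessment:
        """Hypothetical record of a proposal considered against the 5 Safes."""
        safe_data: bool      # data treated to protect confidentiality
        safe_projects: bool  # project approved by the data owner for the public good
        safe_people: bool    # researchers trained and authorised to use data safely
        safe_settings: bool  # access only through a secure environment
        safe_outputs: bool   # outputs screened and approved as non-disclosive

        def assurance_gaps(self) -> list[str]:
            # List any criteria not yet met, to focus the approval discussion.
            flags = {
                "safe data": self.safe_data,
                "safe projects": self.safe_projects,
                "safe people": self.safe_people,
                "safe settings": self.safe_settings,
                "safe outputs": self.safe_outputs,
            }
            return [name for name, met in flags.items() if not met]

    # Example: a proposal whose output-checking arrangements are not yet agreed.
    proposal = FiveSafesAssessment(True, True, True, True, safe_outputs=False)
    print(proposal.assurance_gaps())  # ['safe outputs']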

OSR’s recommendations relevant to the 5 Safes

In July 2023, OSR published our report on data sharing and linkage in government. We had a range of findings. I won’t spell them out here, but in short, we found a good deal of progress across Government, but some remaining barriers to data sharing and linkage. We argued that these barriers must be addressed to ensure that the good progress is maintained.

We made two recommendations relevant to the 5 Safes:

  • Recommendation 3: The Five Safes Framework. Since the Five Safes Framework was developed twenty years ago, new technologies to share and link data have been introduced and data linkage of increased complexity is occurring. As the Five Safes Framework is so widely used across data access platforms, we recommend that the UK Statistics Authority review the framework to consider whether there are any elements or supporting material that could be usefully updated.
  • Recommendation 10: Broader use cases for data. To support re-use of data where appropriate, those creating data sharing agreements should consider whether restricting data access to a specific use case is essential or whether researchers could be allowed to explore other beneficial use cases, aiming to broaden the use case where possible.

We made the recommendation about reviewing the framework because a range of stakeholders mentioned to us the potential for updating the 5 Safes model, in the light of an environment of ever-increasing data availability and ever-more powerful data processing and analysis tools.

And we made the recommendation about broader use cases because this was raised with us as an area of potential improvement.

The use of 5 Safes in research projects

What brings the two recommendations together is the 5 Safes idea of “safe projects”. This aspect of the model requires research projects to be approved by data owners (essentially, the organisations that collect and process the data) for the public good.

For many research activities, this project focus is absolutely ideal. It can identify how a project serves the public good, what benefits it is aiming to bring, and any risks it may entail. It will require the researcher to set out the variables in the data they wish to explore, and the relationships between those variables they want to test.

For some types of research, however, the strictures of focusing on a specific project can be limiting. For example, a researcher who wants to establish a link between wealth and some aspect of health may not know in advance which of the variables in a wealth dataset, and which of the variables in a health dataset, they wish to examine. Using the “safe project” framing, they might have to set out specific variables, only to discover that they are not the most relevant for their research. And then they might have to go back to the drawing board, seeking “safe project” approval for a different set of variables.

Our tentative suggestion is that a small change in focus might resolve these problems. If the approval processes focused on safe programmes, this would allow approval of a broad area of research – health and wealth data sets – without the painstaking need to renew applications for different variables within those datasets.

What I have set out here is, of course, very high level. It would need quite a lot of refinement.

Other expert views on the 5 Safes

Recognising this, I shared the idea with several people who’ve spent longer than me thinking about these issues. The points they made included:

  • Be careful about placing too much emphasis on the semantic difference between programmes and projects. What is a programme for one organisation or research group might be a project for another. More important is to establish clearly that broader research questions can be “safe”. Indeed, in the pandemic, projects on Covid analysis and on Local Spaces did go ahead with a broader-based question at their heart.
  • This approach could be enhanced if Data Owners and Controllers are proactive in setting out what they consider to be safe and unsafe uses of data. For example, they could publish any hard-line restrictions (“we won’t approve programmes unless they have the following criteria…”). Setting out hard lines might also help Data Owners and Controllers think about programmes of research rather than individual projects by focusing their attention on broader topics rather than specifics.
  • In addition, broadening the Safe Project criterion is not the only way to make it easier for researchers to develop their projects. Better metadata (which describe the characteristics of the data) and synthetic data (artificial replicas of a dataset) can also help researchers clarify their research focus without needing to go through the approvals process. There have already been some innovations in this area – for example, the Secure Research Service developed an exploratory route that allows researchers to access data before putting in a full research proposal – although it’s not clear to me how widely this option is taken up.
  • Another expert pointed out the importance of organisations that hold data being clear about what’s available. The MoJ Data First programme provides a good example of what can be achieved in this space – on the Ministry of Justice: Data First page on GOV.UK (www.gov.uk) you can see the data available in the Datasets section, including detailed information about what is in the data.
  • Professor Felix Ritchie of the University of the West of England, who has written extensively about data governance and the 5 Safes, highlighted for me that he sees increasing “well-intentioned, but poorly thought-through” pressure to prescribe research as tightly as possible. His work for the ESRC Future Data Services project sees a shift away from micro-managed projects as highly beneficial – after all, under the current model “the time risk to a researcher of needing a project variation strongly incentivises them to maximise the data request”.

More broadly, the senior leaders who are driving the ONS’s Integrated Data Service pointed out that the 5 Safes should not be seen as separate minimum standards. To a large extent, they should be seen as a set of controls that work in combination – the image of a graphic equaliser to balance the sound quality in a sound system is often given. Any shift to Safe Programmes should be seen in this context – as part of a comprehensive approach to data governance.

Let us know your thoughts

In short, there seems to be scope for exploring this idea further. Indeed, when I floated this idea as part of my keynote speech at the ADR UK conference in November, I got – well, not quite a rapturous reception, but at least some positive feedback.

And even if it’s a small change, of just one word, it is nevertheless a significant step to amend such a well-known and effective framework. So I offer up this suggestion as a starter for debate, as opposed to a concrete proposal for consultation.

Let me know what you think by contacting DG.Regulation@Statistics.gov.uk.

Remember how far you’ve come, not just how far you want to go

In our latest blog, Director General Ed Humpherson reflects on what OSR has achieved and our plans for the future…

At OSR, we are constantly striving to improve. We recently published our Business Plan for 23/24, which outlines our ambitions to support the transformation of statistics and improved communication of statistics, and to build partnerships with other organisations that focus on the public value of statistics and data.

But as the saying goes, it’s important to acknowledge how far we’ve come, as well as where we want to be. So I’d like to begin by setting out what we do.

The Office for Statistics Regulation is the regulatory arm of the UK Statistics Authority. Our role is to promote and safeguard the production and publication of official statistics in the UK.

There are three elements to our role:

  • how statistics are produced – conducting reviews on the statistics produced by government bodies, including awarding the National Statistics designation;
  • how statistics are used – looking at how statistics are used in public debate; and
  • how statistics are valued – promoting the value of statistics.

We do all this underpinned by a coherent philosophy. For statistics to serve the public good, it is essential to recognise that they are not just numbers. They must be produced by trustworthy organisations. They must be high quality. And they must be valuable to users.

Our maturity model

Levels of regulatory maturity

When looking at where we are heading, we use a regulatory maturity model.

At the lower end, the regulator operates at a static level, checking the compliance of individual sets of statistics. This is worthwhile, but not at all responsive to emerging user concerns.

At the next level of maturity, the regulator becomes more dynamic, responding to emerging issues from users. But it would still be piecemeal.

To go beyond this and reach the next level, the regulator must become systemic – thinking about how to foster and support a system of statistics that is responsive to users.

The highest level of the maturity model goes beyond the statistical system and recognises that for statistics to serve the public good, there must be a whole culture of respect for evidence, data and analysis.

Where we are now

How mature are we, assessed against this model? We are certainly delivering our core regulatory programme – which shows we are meeting the basic level of maturity – doing a wide range of assessments and other compliance reports during the year.

We are also responsive to emerging user concerns – for example, about ONS’s excess deaths statistics; or the exam algorithms in 2020; or about the population estimates, a set of concerns that first arose around the city of Coventry.

But this is something we do only partially. In my view there is some way to go in becoming better at anticipating these sorts of user concerns, and more skilled at doing deep dives into specific issues that are raising questions.

We are also increasingly systemic – addressing the ability of the system to meet user needs more widely; for example through our state of the statistics system report and through our campaign on intelligent transparency. And some of this gets into the wider space of a public culture of data and evidence use, for example our work on statistical literacy. We really should develop this further: it’s hugely important.

What people tell us

We continually ask for feedback, and as part of the recent UKSA mid-term strategy review, users completed a survey including questions about OSR. Stakeholders told us that:

– we should continue to do deep dives, but only if they’re done properly. The recent Sturgis review of our work (link) shows where we can improve in this regard.

– we should continue to challenge poor practice.

– we should increase our visibility, and champion effective communication – things we need to do more of.

– we should build partnerships with other organisations.

These all point to us needing to focus at the upper end of the maturity range – to be systemic and outwards focused.

OSR: Work in progress

So what does this all mean in terms of our development as a regulator?

In short, OSR has come a long way. But we are not at the top level of maturity yet. There is a lot we need to improve on – and that’s the intention of our business plan.

We’re keen to hear your views about our work and priorities. Please get in contact with us at regulation@statistics.gov.uk to let us know your thoughts, or if you would like a wider discussion with us.

What 2023 means for OSR

In our latest blog, Director General Ed Humpherson reflects upon the past year and sets out OSR’s priorities for the coming year. We are inviting feedback on these in order to develop our final 2023/24 Business Plan. If you have views on what the key issues in statistics are, please email regulation@statistics.gov.uk 

As the days shorten and the end of the year looms, in OSR we start turning our attention to the next financial year, starting in April 2023: to what we will focus on, what we want to achieve, what our ambitions are.  

This is a reflective and deliberative process: we don’t finalise our business plan for the 2023-24 year until April itself. And it’s also a consultative process: we want to hear from stakeholders about where you think we should focus our attention. 

How we develop our priorities

There are a range of inputs into our thinking. We start from our five year strategic plan, which sets out broadly where we want the statistical system to be by 2025, and how we will help the system get there. We form a judgement as to how close the system is to achieving this vision, and what we should focus on to help support it. 

We also draw on our annual state of the statistical system report (most recently published in July), considering what it tells us about the positive things in statistics we want to nurture and the challenges we want to address. And we also take stock of what we’ve achieved over the last 12 months: whether, in effect, we’ve successfully delivered last year’s priorities. 

But there are two further aspects to our business plan that are crucial. First, we can’t do everything. We are a small organisation, and necessarily we have to make prioritisation choices about where we can most effectively support the public good of statistics. And second, we can’t plan for everything. We have to respond to those areas that are salient in public debate, where statistics are helping or hindering public understanding. And we can’t always predict very well what those issues might be. To take an obvious example, our planning for 2020-21, taking place around this time three years ago, did not anticipate that we’d spend most of 2020-21 looking at statistics and data related to the pandemic. 

To help us make these choices, and to be better at anticipating what might be just over the horizon, we would like the input, advice and creativity of our stakeholders: of people who care about statistics and data, and who want to help us be as effective as we can be. 

Our 23/24 priorities

Today I am pleased to share with you our draft priorities for 2023/24. These are deliberately high level. We have not chosen our menu, selected our ingredients, and already got the cake ready for the oven. We want to leave lots of room for people outside OSR to make suggestions and raise questions. 

Our plan for next year has four high level priorities, all focused on different aspects of supporting change and transformation: 

  • Support and challenge producers to innovate, collaborate and build resilience 
  • Champion the effective communication of statistics to support society’s key information needs 
  • Continue to widen our reach beyond official statistics and official statistics producers 
  • Increase our capability as a regulator 

The keen observers among you might note that these are an evolution of last year’s priorities, rather than a wholesale change. We have had a good year, for sure; but, as always, we will strive to do more.  

Please get in contact with us at regulation@statistics.gov.uk to let us know your thoughts and questions about these priorities, or if you would like a wider discussion with us.

Weights and measures: how consultations relate to OSR’s role

In our latest blog, Director General Ed Humpherson responds to concerns raised with OSR regarding the recent consultation by the Department for Business, Energy & Industrial Strategy on Choice on units of measurement: markings and sales.

This blog was amended on 4 October 2022 to provide further clarity about question design.

Since the start of the Government’s consultation on weights and measures, a lot of people have raised concerns about it with us at OSR. At the time of writing, we have received over 150 emails, and many others have tagged us on Twitter, regarding the Department for Business, Energy & Industrial Strategy’s consultation on Choice on units of measurement: markings and sales, which closed on 26 August 2022.

Before considering the specific question that has been raised with us, let’s first set out some background to how consultations relate to our role.

Consultations, done well, are a vital tool for government in developing policy. They can provide specific and qualitative insight to complement evidence from other sources like official statistics. They also help Government to understand different perspectives, foresee possible consequences and gather expert advice on implementation.

Our remit focuses on statistics. We set the standards for how Government collects and uses statistics. Consultations are relevant to our work in two senses. First, consultations can draw on statistics, for example to illustrate the scale of an issue a policy intends to address. Second, consultations can create statistical evidence – for example, the number of people responding to a particular question. In both senses, then, consultations matter to OSR’s work.

Turning to this particular consultation, the aim was to identify how the Government can give more choice to businesses and consumers over the units of measurement they use for trade, while ensuring that measurement information remains accurate.

However, when the consultation was launched, many stakeholders raised concerns surrounding the consultation questions and in particular question 3a. The question was as follows:

If you had a choice, would you want to purchase items (i) in imperial units? or (ii) in imperial units alongside a metric equivalent.

There was no option to select an outcome without imperial units at all. People felt that this was a potentially biased way of collecting information on public views on changes to measurement.

Given the concerns raised with us, we approached BEIS as the Department conducting the consultation. We wanted to understand the reasons for this question. They advised us that UK law currently requires metric units to be used for all trade purposes, with only limited exceptions. The purpose of the consultation was to identify how they can give greater choice to businesses and consumers over the units of measurement they use to buy and sell products. BEIS did also say that respondents had multiple opportunities to give a range of views through the consultation, and that all responses would be carefully considered.

This explanation was helpful. But it still doesn’t make this a well-designed question, because it doesn’t offer a complete range of responses. For example, including a ‘none of the above’ option would have allowed respondents the opportunity to express a preference for a unit of measurement other than imperial, or imperial with metric. In the absence of such an option, the design in effect makes this an overly leading question.

So, what happens next? We would be surprised to see the evidence drawn from this specific question being used in any kind of formal quantitative way. If the results of this poorly designed question were presented in statistical terms (x% of people said…etc), then this would represent the generation of a statistical claim from a consultation. And in our view it would be potentially quite misleading as a representation of levels of support for any proposed policy change.

How OSR secures change

Why do organisations do the things they do? Strip away all the language of business plans and objectives and strategies, and what it often boils down to is wanting to achieve some kind of positive impact.

It’s important to remember that as we launch our latest business plan for 2022/23. In this blog, rather than highlight specific outputs and priorities, I want to talk more generally about how OSR, as a regulator, secures positive change.

By change, we mean regulatory work or interventions that ensure or enhance statistics serving the public good. There are basically two ways in which our work leads to statistics serving the public good. Our work can:

  • secure a positive change in the way statistics are produced and/or presented; and/or
  • make a direct contribution to public confidence and understanding.

OSR clearly does secure impact. In response to our reports, producers of statistics make changes and improvements to their statistics and other data. Statistics producers also use the presence of OSR in internal debates as a way of arguing for (or against) changes – so that OSR casts a protective shadow around analytical work. OSR can also secure changes to how Ministers and others present data. And OSR also achieves impact through getting departments to publish previously unavailable data. In all these ways, then, OSR secures impact, in the sense of statistics serving the public good to a greater extent.

In terms of formal statutory powers, the main lever is the statutory power to confer the National Statistics designation. This, in effect, is a way of signalling regulatory judgement. The regulatory role is to assess – that is, to review and form a judgement – and, on the basis of that judgement, the National Statistics designation is awarded. We have recently been reviewing the National Statistics designation itself.

A further power in the Statistics and Registration Service Act is the power to report our opinion. Under section 8 of the Act, we are expected to monitor the production and publication of official statistics and report any concerns about the quality of any official statistics, good practice in relation to any official statistics, or the comprehensiveness of any official statistics.

These statutory powers do not create influence and drive change by themselves. We need to be effective in how we wield them. We have to supplement them with a powerful vision, good judgement, and effective communication.

The power of ideas

The most significant source of influence and impact is the power of the ideas that underpin the Code of Practice for Statistics. The Code is built on the idea that statistics should command public confidence. It is not enough for them to be good numbers: collected well, appropriately calculated. They must have attributes of trustworthiness, quality and value.

The power of these ideas comes from two sources. First, they are coherent, and in the Code of Practice are broken down into a series of increasingly granular components – so the ideas are easy for producers to engage with and implement. Second, they have enormous normative power – in other words, trustworthiness, quality and value represent norms that both statisticians and senior staff want to be seen to adhere to, and that wider users want to see upheld.

These powerful, compelling ideas represent, then, something that people want to buy into and participate in. A huge amount of OSR’s impact happens when OSR is not even directly involved – by the day-to-day work of statisticians seeking to live up to these ideas and the vision of public good that they embody.

Judgements

OSR’s work begins with the ideas embodied in the Code, which we advocate energetically, including through our crucial policy and standards function. The core work for OSR actually consists of making judgements about trustworthiness, quality and value, in multiple ways:

  • our assessments of individual sets of statistics, where we review statistics, form judgements, and present those judgements and associated requirements to producers and then publicly – either through in-depth assessment reports, or by quicker-turnaround reviews of compliance;
  • our systemic reviews, which address issues that cut across broader groups of statistics, and which often focus on how well statistics provide public value, including highlighting gaps in meeting user needs;
  • our casework, where we make judgements about the role statistics play in public debate – whether there are issues with how they are used, or how they have been produced, which impact on public debate; and
  • through our broader influencing work, including our policy work, and our research and insight work streams.

These judgements are crucial. Our ability to move fluidly using different combinations of our regulatory tools is important to securing impact. It allows us to follow up where the most material change is required and extend our pressure – and support – for change.

We are able to make judgements primarily through the capability of OSR’s people. Their capability is strong, and we depend on their insight, analysis, judgement and ability to manage a range of external relationships.

Communication and reach

It is not enough for us to make good judgements. We need to make sure that any actions are implemented – in effect, that our judgements radiate out into the world and lead to change.

There are three main channels for achieving this reach:

  • Relationships with producers: our relations with producers are crucial. Heads of Profession and the lead statisticians on individual outputs are the key people for making improvements; their buy-in is crucial.
  • Voice and visibility: having a public voice magnifies our impact. It ensures that policymakers are aware of what we do and understand that our interventions can generate media impact.
  • Wider partnerships: while our direct relationships with producers and our public voice can often create sufficient leverage for change, we also draw on wider partnerships. For example, credible external bodies like the RSS and Full Fact can endorse and promote our messages – so that producers face a coalition of actors, including OSR, pushing for change.

And we put a lot of emphasis on the views, experiences and perspectives of users of statistics. Almost all our work involves engaging with users, finding out what they think, and seeking to ensure producers focus on their needs.

In that spirit, we’d be very keen to get reactions on our own business plan – from all types of users of statistics, and also from statistics producers.

Conclusion 

Business plans should not simply be a list of tasks. It is also important to be clear on how an organisation delivers, how individual projects and priorities help achieve a positive impact. In OSR’s case, achieving this impact involves the power of ideas, good judgement and effective reach.

With this clarity around impact, our business plan (and work programme) comes to life: more than just a set of projects, it’s a statement of ambition, a statement of change.

But the business plan is also not set in stone. We are flexible and willing to adapt to emerging issues. So if there are other areas where we should focus, or other ways we can make a positive difference, we’d really welcome your feedback.

Acting against the misuse of statistics is an international challenge

Acting against the misuse of statistics is an international challenge

That was the message of the event on 14 March hosted by the UK Statistics Authority on the prevention of the misuse of statistics, which formed part of a wider campaign to celebrate the 30th anniversary of the UN Fundamental Principles of Official Statistics.

My fellow panellists, Dominik Rozkrut and Steven Vale, and I discussed a range of topics, from addressing statistical literacy to regulatory best practice, and from memorable examples of misuse to the cultural differences that affect public trust internationally. Although we all had different experiences and approaches, it was clear that there was a common passion for the truth and for statistics that serve the public good.

The event had all the merits of online meetings that we’ve all become familiar with: lots of people able to join from a wide range of locations, and lots of opportunities for people to engage using live chat functions.

Perhaps some people find it less intimidating to type a question into an app than to raise their hand in a crowded room: there were lots of interesting questions asked, and it was clear that the issue of preventing the misuse of statistics generated a lot of interest and passion from the audience as well as the panellists.

But the event also brought with it a new kind of frustration to me as a speaker: there were too many questions to answer in the time available and I felt bad that we couldn’t answer all the questions that people typed in.

So, in an attempt to rectify that, I’ve decided to use this blog to address the questions directed at me that I didn’t answer in real time, and those which touched on the key themes that came across during the Q&A.


“Who are the best allies in preventing misuse or building statistical literacy outside of stats offices? Are there any surprising allies?”

There are obvious allies for us, like Full Fact and the Royal Statistical Society.

I’d also like to give a shout-out to the work of Sense about Science. Their work highlights that there is a huge amount of interest in evidence, data and statistics – and that a simple model of experts versus “the public” is far too simplistic.

There are a huge range of people who engage with evidence: teachers, community groups, people who represent patients, and a lot of others. These people, who want to find out the best evidence for their community, are fantastic allies.

And I’d also pick out a surprising ally: politicians. In our experience, politicians are almost always motivated to get it right, and not to misuse statistics, and they understand why we make the interventions we do. So perhaps they are the ally that would most surprise people who look at our work.

“How important is statistical literacy among the media and general public in helping prevent the misuse of statistics?”

I think that having critical thinking skills is important. People should feel confident in the statistics that are published, but also feel confident that they know where to find the answers to any questions they have about them.

But equally, we need statistical producers to be better in how they communicate things like uncertainty, in a way that is meaningful for the public.

So rather than putting the responsibility of understanding solely on the user, and just talking about statistical literacy, let’s also talk about producers’ understanding – or literacy if you will – about public communication.

“You have mentioned that sometimes the stats get misinterpreted because of the way they are presented – can you share some examples?”

My point here was that misinterpretation is a consequence of what producers of statistics do. One example we’ve seen frequently during the pandemic concerns data on the impact of vaccines. It’s been the case that sometimes people have picked out individual numbers produced by public health bodies and highlighted them to argue their case about vaccines. Producers need to be alive to this risk and be more willing to caveat or re-present data to avoid this kind of misinterpretation.

“What are your views on framing statistics for example 5% mortality rate vs 95% survival rate? Both are correct but could be interpreted very differently.”

I find it impossible to answer this question without context, sorry! I definitely wouldn’t say that, as an absolute rule, one is right and the other is wrong. It depends on the question the statistician is seeking to inform. I can’t be more specific than that in this instance.

However, to avoid possible misinterpretation, we always recommend that producers use simple presentation, with clear communication about what the numbers do and do not say.

“How do we balance freedom of expression with the need to prevent the abuse and misuse of statistics?”

We don’t ban or prohibit people from using statistics, so in that sense there’s no barrier to freedom of expression. But we do want to protect the appropriate interpretation of statistics – so our interventions are always focused on clarifying what the statistics do and don’t say, and asking others to recognise and respect this. It’s certainly not about constraining anyone’s ability to express themselves.

“What’s the most damaging example of misuse of statistics that you’ve come across in your career?”

Here I don’t want to give a single example, but rather a type of misuse which really frustrates us. It’s when single figures are used as a piece of number theatre, but the underlying dataset from which the single figure is drawn is not available, so it’s not possible for the public to understand what sits behind the apparently impressive number. It happens a lot, and we are running a campaign, which we call Intelligent Transparency, to address it.

“Can you give us some more insight into how you steer clear of politics, media frenzies, and personalities?”

We always seek to make our intervention about clarifying the statistics, not about the arguments or policy debates that the statistics relate to. So we step in and say, “this is what the statistics actually say” and then we step out. And we don’t tour the news studios trying to get a big name for ourselves. It’s not our job to get media attention. We want the attention to be on what the statistics actually say.


I hope these answers are helpful, and add some context to the work we do to challenge the misuse of statistics. I also hope everyone reading this is going to follow the next series of events on the UN Fundamental Principles of Official Statistics.

The next round of events is moving on from preventing misuse, to focusing on the importance of using appropriate sources for statistics. Find out more about them on the UNECE website.

Why I love evaluation

In our latest blog, Director General Ed Humpherson looks at the importance of government evaluation in light of the Evaluation Task Force ‘Policy that Works’ conference.

There are lots of good reasons to love evaluation. It provides evidence of what works; it supports good policy; it builds the skills and reputation of analysts; it helps scrutiny. 

But I love evaluation for another reason too. Evaluation, done well, is fundamental to changing the way government in the UK works. 

This week the Evaluation Task Force is running one of the biggest ever conferences on government evaluation. So it’s a great time to celebrate evaluation, and my reasons for loving it.  

In a recent speech, Bronwen Maddox, the Institute for Government’s Director, set out a compelling case for why government needs to change. She highlighted the challenge of the rotation of officials, who move on for career advancement and so don’t build grounded expertise in their subject matter. She talked of a lack of accountability. And she said these things combined to create an air of “unreality” in the way Government approached key challenges.  

In some ways this critique is exaggerated. There are lots of examples of officials with grounded expertise, taking responsibility for their decisions and implementation, and understanding the realities of the policy problems they are addressing. But there are enough cases where the critique is fair for us all to take it seriously. I saw it when I was looking at the value for money of Government programmes when I was at the National Audit Office. And I see it now in our work at the Office for Statistics Regulation. 

Evaluation is for me a great antidote to these problems. By committing to evaluation, as it is doing through the Evaluation Task Force, Government is investing in corporate memory. It lays the groundwork for a commitment to gathering and retaining evidence of what works, and, crucially, how and why it works. By committing to evaluation, Government is creating an intelligent form of accountability – not the theatre of blame and finger-pointing, but a clear-headed consideration of what has actually happened as policy is implemented. And by a relentless focus on real-world evidence, evaluation combats Maddox’s air of unreality. 

It aligns with a lot of what we champion at the Office for Statistics Regulation. We have emphasised the importance of analytical leadership in Government – how Government and society benefit when the analytical function is not simply a passive provider of numbers, but a key partner in the work of Government. And this requires the analytical function to be full of leaders who can articulate the benefits of analysis, and make it relevant and useful – not just to policy makers but to a wider public.  

And we champion public confidence in data produced by Government. This public confidence is secured by focusing on trustworthiness, quality and value. 

Analytical leadership, and the triumvirate of trustworthiness, quality and value, are central to securing the benefits of evaluation. Analytical leadership matters because to do great evaluation requires clarity of vision, strong objectives, and long-term organisational commitment. 

And trustworthiness, quality and value are central to good evaluation: 

  • Trustworthiness is having confidence in the people and organisations that produce statistics and data – being open to learning what works and what doesn’t, and open about the use of all evaluation, giving advance notice about plans and sharing the findings. The commitments to transparency that the Evaluation Task Force is making are crucial in this regard.
  • Quality means data and methods that produce assured statistics – it ensures the evaluation question is well defined and the right data and robust methods are selected, tested, and explained.
  • Value supports society’s needs for information – it means evaluation can be impactful, addressing the right questions and ensuring the correct understanding of the evidence.

Of course, I don’t claim for a second that evaluation is a panacea for the challenges of government. That too would be an exaggeration. But I do think it has tremendous potential to help shift the way government works. Led in the right way, and adhering to the principles of TQV, evaluation can make a big difference to the way government operates. 

That is why I applaud the energy and focus of the Evaluation Task Force, which has galvanised interest and attention. It’s why I like the Evaluation Task Force’s website and why I celebrate this week’s conference. 

And it is why I love evaluation. 

Letting the good data shine: The state of the UK system of statistics 2021

At OSR we’ve long been concerned about the risks that a world of abundant information and misinformation could lead to a catastrophic loss of confidence in statistics and data – and that our public conversation, cut loose from any solid foundations of evidence and impartial measurement, would be immeasurably weakened as a result. That is, at root, what we exist to prevent. I have written about this before as a form of statistical Gresham’s Law – how the risk is that the bad data drive out the good, causing people to lose confidence in all the evidence that’s presented to them in the media and social media.

I’ve also said that this is not inevitable, and indeed we can easily envisage a reverse effect: the bad data being driven out by the good data – that is, the trustworthy, high quality, high value data.

What it takes to secure this positive outcome is a public sector statistical system focused on serving the public good. A system that does not regard official statistics as just a Number, shorn of context, calculated in the way it always has been done, some kind of dusty relic. Instead a system that regards the production of statistics as a social endeavour: engaging with users of statistics, finding out what they want and need to know, and responding in a flexible and agile way to meet those needs.

The pandemic has really tested the public sector statistical system and its ability to provide the good data, the trustworthy, high quality, high value data. The pandemic could have seen us being overwhelmed with data from a wide range of sources, some less reliable than others. It could also have seen Government statistics retreating to the Just a Number mindset – “we just count cases, it’s up to others to decide what the numbers mean”. But the system has not done this. Instead, as our report published today shows, the statistical system has passed the test with flying colours.

Statistical producers (producers) across the UK nations and system-wide have responded brilliantly. They have shown five attributes. It’s easy to see these attributes in the work of public health statisticians and ONS’s work on health and social care statistics. They have done great things. But what’s clear to us is that these attributes are system-wide – appearing in lots of producers of statistics and across lots of statistical areas.

Responsive and proactive

Producers across the UK governments have been responsive, proactive and agile in producing data and statistics to support policy and to provide wider information to the public, which really adds value. For example, the ONS launched the Coronavirus (COVID-19) Infection Survey in swift response to the pandemic. The survey provides high-quality estimates of the percentage of people testing positive for coronavirus and for antibodies against coronavirus. These statistics provide vital insights into the pandemic for a wide range of users – including government decision-makers, scientists, the media and the public – and are essential for understanding the spread of the virus, including new variants.

Timely

Producers have responded impressively to the need for very timely data, ensuring that decisions around responses to the pandemic are based on the most up-to-date evidence. For example, the ONS published the first of its weekly Economic activity and social change in the UK, real-time indicators publications (previously called Coronavirus and the latest indicators for the UK economy and society) in April 2020, one month after the UK first went into lockdown, and has continued to do so ever since. The publication contains a series of economic and social indicators (for example, card spend, road traffic and footfall), which come from a variety of different data sources. These assist policymakers with understanding the impact of the pandemic and gauging the level of overall economic activity. During the early weeks of the COVID-19 pandemic, the Department for Transport rapidly developed near-to-real-time statistics about transport use during the coronavirus (COVID-19) pandemic. The statistics were regularly used in No10 press conferences (example in slide 2) to show the change in transport trends across Great Britain and gave an indication of compliance with social distancing rules.

Collaborative

Collaboration and data sharing and linkage have been a key strength of both the UK statistical system and the wider analytical community over the past year. This more joined-up approach has improved our understanding of the impact of the pandemic, both on public health and on wider areas such as employment and the economy. For example, during the pandemic, ONS and HMRC accelerated their plans to develop Pay as You Earn (PAYE) Real Time Information (RTI) estimates of employment and earnings. Earnings and employment from PAYE RTI is now a joint monthly experimental release drawing on HMRC’s PAYE RTI system, which covers all payrolled employees and therefore allows more detailed estimates of employees than a sample-based approach, as well as information on pay, sector, age and geographic location.

Clear and insightful

We have seen some good examples of clearly presented and insightful statistics which serve the public good. For example, Public Health England (PHE) developed and maintains the coronavirus (COVID-19) UK dashboard. This dashboard is the official UK government website for epidemiological data and insights on coronavirus (COVID-19). The dashboard was developed at the start of the pandemic to bring together information on the virus into one place and make it more accessible. Initially it presented information for the UK as a whole and for the four UK countries individually. Over time it has developed so that data are now available at local levels. We have seen the increasing use of blogs to explain to users how the pandemic has affected data collection, to set out changes to methodologies, and to bring together information available about the pandemic. For example, the Scottish Government has blogged about analysis and data around COVID-19 available for Scotland. We have also seen examples of statisticians engaging openly about data and statistics and their limitations, both within and outside government, helping wider understanding of these data and statistics. For example, Northern Ireland Statistics and Research Agency (NISRA) statisticians have introduced press briefings to explain their statistics on weekly deaths due to COVID-19. The Welsh Government Chief Statistician’s blog is a regular platform for the Chief Statistician for Wales to speak on statistical matters, including providing guidance on the correct interpretation of a range of statistics about Wales.

Transparent and trustworthy

For statistics to serve the public good they must be trustworthy, and this includes statistics being used and published in an open and transparent way. We have seen efforts to put information in the public domain and producers voluntarily applying the Code of Practice for Statistics (‘the Code’) to their outputs. For example, the Department of Health and Social Care (DHSC) publishes weekly statistics about the coronavirus (COVID-19) NHS test and trace programme in England. DHSC has published a description about how the pillars of the Code have been applied in a proportionate way to these statistics. However, inevitably the increased volume of and demand for data has placed a greater burden on producers and led to selected figures being quoted publicly when the underlying data are not in the public domain.

But our report also shows how the system cannot take these 5 attributes for granted. What has been achieved in the high pressure environment of a pandemic must be sustained as we ease out of being a pandemic society. New challenges – like addressing regional imbalances, or moving to a greener economy or addressing issues like loneliness and inequality – cannot be understood using objective statistics if the system retreats into the Just a Number mentality.

So, our report sets out a number of recommendations. The recommendations aim to make sure that the statistical system we have seen in the pandemic is not an aberration, but is – in the classic pandemic phrase – the new normal. A system that can harness these five attributes is one that serves the public good. It is the best way to ensure that the bad data do not thrive and the good data shine out.

Glimmers of light for adult social care statistics

I was very interested in a recent Social Finance report on how to secure impact at scale. One of their points is that, if you want to see impact at scale, you need to be willing to seize the moment. Change arises when supportive policies and legislation fall into place, and when a new public conversation starts.

This idea – new policies, and new public conversations – made me think of social care statistics. It’s very tragic that it has taken the disastrous impact of the pandemic in care homes to focus attention on this issue, but there seems to be some potential for progress on the statistics now.

The background is that we’ve been shouting about the need for better statistics for the last couple of years. We’ve done so through reports on social care statistics in England, Scotland and Wales. We’ve done it through presentations, and I’ve taken the opportunity to highlight it when I’ve given evidence at the Public Administration Committee in the House of Commons.

Certainly, we found some excellent allies in Future Care Capital and the Nuffield Trust, yet it has sometimes felt like we’re in the minority, shouting in the wilderness.

What were our concerns? Our research in 2020 highlighted several challenges and frustrations related to adult social care data that were common to England, Scotland and Wales. Our report summarising the common features of the statistics across Great Britain highlighted four key issues that need to be addressed to help both policymakers and individuals make better-informed decisions about social care:

  • Adult social care has not been measured or managed as closely as healthcare, and a lack of funding has led to under investment and resourcing in data and analysis.
  • There is an unknown volume and value of privately funded provision of adult social care. Although data is collected from local authorities, this only covers activities that they commission and fund, which constitute a smaller proportion of total adult social care activity.
  • Robust, harmonised data supply to ensure comparable statistics from both public and private providers is problematic, as data collection processes are not always standardised. Furthermore, data definitions might not always be consistent across local authorities and other providers.
  • Data quality is variable within and across local authorities, with inconsistent interpretation of data reporting guidance by local authorities. This means that data isn’t always reliable and so users have difficulty trusting it.

As the pandemic has highlighted, when it comes to social care data there is not so much a gap as a chasm, with consequences for our understanding of social care delivery and outcomes.

Most people we’ve talked to, inside and outside the UK’s governments, recognise these issues. But to date there hasn’t been much evidence of a sustained desire to inject energy into the system to effect change.

Maybe, though, there are glimmers of light. Whilst this list is not meant to be exhaustive, I would like to draw attention to some initiatives that have caught my eye.

  • The first comes from an extremely negative space: the pandemic’s impact on those in care homes. Not only has the pandemic highlighted the importance of care and care workers, it has also led to much more interest in data about the care home sector. The Care Quality Commission and the Office for National Statistics (ONS) collaborated to publish timely information on the numbers of deaths in care homes, to shine a light on the impact of the pandemic on this vulnerable population. And DHSC has commenced the publication of a monthly statistics report on Adult social care in England to fill a need for information on the social care sector itself. As a result, people now listen to analysts and statisticians when we raise the problems with social care data. Of course, the questions people are interested in go well beyond COVID-19.
  • The Department for Health and Social Care’s draft data strategy for England makes a significant commitment to improving data on adult social care.
  • The Goldacre Review of data in England may present a further opportunity for improvement.
  • I was pleased to restore the National Statistics designation to the Ministry of Housing, Communities and Local Government’s statistics report about local authority revenue.
  • Beyond the pandemic, ONS is working in collaboration with Future Care Capital to shine a light on one of the biggest data gaps here: the private costs borne by individuals and families for care. And ONS has recently published estimates of life expectancy in care homes prior to the pandemic.
  • Adult social care remains high on the political agenda in Scotland, with the recently published independent review of adult social care by the Scottish Government and the inquiry by Scotland’s Health and Sport Committee.
  • The Welsh Government remains committed to improving the data it captures on social care.

It’s far too early to declare “problem solved”, but there are grounds for optimism about improvements to the data. We’ll be reviewing the actions currently underway as statistics producers react to the gaps in social care statistics highlighted by the pandemic, and we will publish a comprehensive report of our findings in the autumn.

What I do think is that there is an opportunity here – if statistics producers across the UK are willing to take it, we can anticipate much better statistics on this sector. And a much better understanding of the lives and experiences of citizens who receive, and provide, social care.