The Power of Conversation

In our latest blog, Head of OSR Ed Humpherson discusses our consultation on a revised Code of Practice, which is open for comments until 14 February 2025. Read more about the consultation and how to have your say here.

I have been asking myself why it is only now that I am writing a blog on our consultation on a revised Code of Practice, several weeks after its launch.

The consultation is big news for OSR and for the UK statistical system: the Code is our foundational set of principles, our conceptual framework, our guiding light. And it’s not as if we are proposing some mere tidying-up measures, the sort of pruning and weeding that a good gardener does to maintain their garden.[1] We are proposing some significant landscaping changes – particularly to the structure and presentation of the Code.

Perhaps the answer comes down to my observation that most endeavours in the world of statistical regulation depend on, and are enriched by, conversation. OSR’s best work – our most impactful reports and interventions – is effective because of our engagement and interaction with users of statistics, both expert and not, and with the people who produce the statistics.

To give two examples: first, our annual state of the statistics system report is not dreamt up by us in a meeting room; it builds on a whole series of conversations across the statistical system, both with users and producers. Second, our assessments of individual statistics draw heavily on engagement with users; take a look at our recent assessment of ONS’s Price Index of Private Rents to see this in action.

Launching a consultation is not an end point in itself. It is an invitation to other people to share their thoughts, reflections and criticisms.

Moreover, the part of the work I particularly enjoy is not the sense of achievement on the day of publication. It’s hearing all the subsequent reactions and comments, and joining in the discussions that ensue.

That’s why I was so happy last week to participate in a joint event between OSR and the Royal Statistical Society (RSS) to discuss the new proposed Code. We heard a range of interesting and thought-provoking reactions, such as those from Paul Allin, Honorary Officer of the RSS for National Statistics, on the importance of recognising the public role of statistics; from Ken Roy, an independent researcher and former head of profession in a government department, who highlighted that the Code is the glue that holds together the large and complex UK statistics system; and from Deana Leadbeter, Chair of the RSS Health Statistics User Group, who welcomed the ambition of a more digestible Code for a wider audience. And we had some excellent questions from the audience on topics ranging from the limits to trustworthiness (from a colleague in the Hungarian national statistical institute) to the importance of simplicity.

These productive conversations are why I’m looking forward to the debates and dialogues around the new Code in the coming months – including those with the Market Research Society and the Forum of Statistics User Groups.

I want to hear people’s reactions to the new Code, including their views on:

And I want to hear a wide range of other thoughts – not just about the things that we want to highlight, like those three bullets above – but the things we have missed.

This emphasis on engagement and conversation is not only a core value for OSR. It’s also central to the Code of Practice itself. The new Code that we are proposing sets even clearer and firmer requirements for statistics producers in how they should engage with their users and transparently communicate how they have produced their statistics, and what their statistics do (and don’t) mean.

So, back to the question at hand: why didn’t I write this blog until now? It’s this: for me, the day the consultation is published is not always the best day to publish a blog. Instead, it can be better to wait until we’ve started to hear some views. Or, to put it simply: communication shouldn’t be about broadcasting a fixed view. Instead, it’s all about the power of conversation.

Read more about the Code consultation and how to have your say here. 

[1] What is it with me and gardens? I used to do a presentation all about walled gardens – how official statistics can’t be a walled garden, pristine but closed off from the world. They need to be open and accessible. Now, as then, I reach for a garden metaphor. It can’t be that I use these gardening analogies because I myself am an adept and successful gardener. I mean, you should just look at my own garden to realise that.

 

The importance of separation: Ed Humpherson addresses questions raised by the Lievesley review

In our latest blog, Director General for OSR, Ed Humpherson, speaks about how OSR’s separation from ONS is communicated

Since Professor Denise Lievesley published her review of the UK Statistics Authority, most attention has been on the Statistics Assembly. The Lievesley review considered the work of the UK Statistics Authority, and its first recommendation was that the Authority should hold a Statistics Assembly every three years. The Assembly should elicit and explore the priorities for the statistical system with a wide range of users and stakeholders. And the first Assembly will present a fantastic opportunity to have a wide-ranging conversation about statistics and enable a discussion about the strengths and limitations of the current statistical landscape in the United Kingdom. So, it’s not surprising that this recommendation has generated a lot of interest. 

The Lievesley review raised other important issues. Some of these issues relate to OSR. In particular, she highlighted that OSR needs to improve how it communicates the separateness of its role from ONS.  

Separation matters to us. Indeed, when we look at official statistics, we start by considering the trustworthiness of the governance processes used by the statistics producers – by which we mean the mechanisms in place to ensure and demonstrate that the statistics are free from the vested interest of the organisation that produces them – that they are the best professional judgement of the statisticians. 

Similarly, it’s important that our decisions reflect our best professional judgement and that, in relation to ONS, we can make those judgements without giving any weight to ONS’s own organisational interests. 

We have several arrangements in place to secure our separation from ONS. But if people don’t appreciate or understand them, they are not working. And the Lievesley review made clear that we need to better explain the processes that assure this separation to our stakeholders and the public. That’s why today we are publishing a statement that sets out, in formal terms, the arrangements that underpin our separation from ONS. 

The key features of the statement are: 

  • a summary of the governance arrangements, under which we report our work to a separate Committee of the Authority Board made up only of non-executive members and me – that is, with no membership role for ONS staff;
  • a summary of the arrangements for setting strategy and budget, which involve me and my team making our proposals directly to this Committee and to the Board, with no decision-making role for ONS staff;
  • a confirmation of my own personal reporting lines, which mean that I do not report to the National Statistician in any way, but directly to the Authority’s Chair; and that I have regular meetings with the Chair without the National Statistician or any senior ONS staff attending.

Please read the statement if you’d like to learn more about these arrangements. 

But I’ll close with Denise’s own words. The most important features of any governance set-up are the budget and the performance management. And on this, she was clear:  

“The existence of a small number of misunderstandings by users also appear to perpetuate, such as that the Director General for Regulation is line managed by the National Statistician (he is not) or that the National Statistician controls the budget of the OSR (he does not). Nor does the National Statistician attend Regulation Committee meetings.”

I hope the statement we are publishing today helps provide some reassurance and address the issues of perception identified by Denise’s review.  

Embedding the habit of intelligent transparency

In our latest blog, Director General for OSR, Ed Humpherson, looks at the importance of intelligent transparency for Governments across the UK.

Intelligent transparency is one of the most important sets of principles that we uphold. When statistics and quantitative claims are used in public debate, they should enhance understanding of the topics being debated and not be used in a way that has the potential to mislead. To help those making use of data in public debate, we have set out our three underpinning principles of intelligent transparency. These principles demand that statistical claims and statements are based on data to which everyone has equal access, are clearly and transparently defined, and for which there is appropriate acknowledgement of any uncertainties and relevant context.

We have promoted intelligent transparency to the UK Government and the Governments of Scotland, Wales and Northern Ireland. And the Chair of the UK Statistics Authority, Sir Robert Chote, set it out clearly in his letter to political party leaders ahead of the 2024 general election. We have also made a number of interventions to support the principle of equal access.

 

Intelligent transparency in conference speeches

Equal access means that all statements involving statistics or data must be based on publicly available data, preferably the latest available official statistics. Claims should not be made based on data to which ministers have privileged access, as this prevents the claims from being scrutinised and undermines confidence in official statistics.

We recognise that conference speeches by Ministers in particular can be difficult, as noted in this blog we published in September. Ministers want to highlight policy successes to their party members, but they do not have the input of the civil servants who would normally ensure that their statements use statistics and data appropriately.

In this context, we were recently made aware of a statement made by the Prime Minister, Sir Keir Starmer, at the Labour Party Conference regarding immigration returns. The claim in question was that there has been “a 23 per cent increase in returns of people who have no right to be here, compared with last summer”. At the time the Prime Minister made this claim, there were no Home Office data or statistics available in the public domain for the relevant time period to support this statement.

 

The importance of publishing data used in the public domain

Following the statement made by the Prime Minister, we engaged with the Home Office, and we welcome the ad hoc statistical release it has published, which provides the underlying data that relate to this statement. In most cases we would want to see the release of unpublished data very soon after the statement itself. But we do understand that sometimes, as in this case, providing the underlying data in a usable format may take longer. We consider that the approach taken by the Home Office is in line with the principles of intelligent transparency. It is a good example of how to resolve a situation when unpublished information has found its way into a speech.

 

Working with Government to ensure best practice

We are using this example as an opportunity to re-emphasise the importance of intelligent transparency to a range of audiences, including Heads of Profession for Statistics, the wider Government Analysis Function, and the Government Communications Service. We have also discussed the issue with officials in Number 10, who have confirmed the importance they attach to appropriate use of data. We are happy to provide support and advice to departments on the principles of intelligent transparency for analysts, communication professionals, special advisers, and any other colleagues who may benefit. To discuss this with us, please contact us via regulation@statistics.gov.uk.

For all Governments, it is important to avoid the use of unpublished information in the public domain. We are always happy to work with officials and advisers to embed good habits of intelligent transparency as fully as possible.

 

Whose line is it anyway? Why the misleading presentation of statistics cannot be dismissed as just a matter of opinion

“Statistics may be used to provoke and to challenge the status quo; but Ofcom is entitled to insist – in the public interest – that they should not be misused so as to mislead”. Not our words, but the words of a recent High Court Judgment following an intervention by Ofcom to challenge the presentation on GB News of vaccine statistics produced by the UK Health Security Agency (UKHSA).

When concerns are raised to OSR about the communication or presentation of statistics, we are often asked to comment on whether we judge something to be misleading. Misleadingness – how to define it, and what it means in our context as a statistics regulator – is something that we routinely come back to.

Being able to intervene and publicly challenge the misuse of statistics is a crucial part of meeting our statutory objective of ‘promoting and safeguarding the production and publication of official statistics that serve the public good’ under the Statistics and Registration Service Act 2007. When statistics are misused, it damages public confidence in data and those communicating the messages.

As this High Court Judgment shows, we are not alone in tackling misuse of data and misleading communication. The High Court supported Ofcom’s right to intervene and emphasised that the misleading presentation of statistics cannot be dismissed as just a matter of opinion.

“The purpose of both the caveat on effectiveness and the contextual statement was to sound a warning against the simple and undifferentiating comparison of groups. Yet, an undifferentiating comparison was undertaken on the Show”. – High Court Judgment

This sets a valuable formal legal precedent and echoes many of OSR’s messages around the importance of pre-empting statistics being used in a way that has the potential to mislead. We often set recommendations to producers in our regulatory work that state the importance of highlighting information on quality, caveats and context alongside the statistics to support appropriate use. But it’s just as important to do these things to prevent misuse of the data – and that is exactly what UKHSA had done in this case.

“We present data on COVID-19 cases, hospitalisations and deaths by vaccination status. This raw data should not be used to estimate vaccine effectiveness as the data does not take into account inherent biases present such as differences in risk, behaviour and testing in the vaccinated and unvaccinated populations.” COVID-19 vaccine surveillance report, UKHSA

The Ofcom case demonstrates why our advice and the Code of Practice for Statistics should not be thought of as a tick-box exercise. Preventing misuse of data is an essential part of protecting the value and impartiality of statistics, which in turn serve the public good. When communicating quality, and uncertainty in particular, it can be tempting for statistics producers to fall back on generic wording such as ‘users should exercise caution’, but as my recent blog highlights, these statements don’t go far enough in supporting use and interpretation of the statistics.

It is much harder to react quickly to and debunk misuse once it has happened, especially in an age of social media, so effort should be made to prevent it happening in the first place. We will continue to work on identifying best practice for preventing misuse, and to support producers in communicating statistics through different means so that they can be understood by audiences with varying levels of statistical knowledge.

A good rule of thumb is to consider how a reasonable person would interpret the statement being made and ensure that this is not likely to be misleading in the absence of additional information.

Ed Humpherson reflects on why communicating uncertainty is a constant challenge for statisticians

In our latest blog, Director General for Regulation, Ed Humpherson, reflects on why communicating uncertainty is a constant challenge for statisticians.

A recurring motif of OSR reports is the need to communicate uncertainty better. This was the theme of an article I recently published in the Journal of Risk Communication, which explores how uncertainty is presented in official statistics.

What is the issue?

Start with GDP. When we reviewed ONS’s process for revising GDP, the communication of uncertainty was one of our main requirements for improvement. We asked ONS to improve the presentation of early estimates of GDP and all supporting information it provides on the uncertainty surrounding these estimates. ONS should ensure that the uncertainty surrounding GDP statistics is in itself a key message.

I also chaired a discussion of uncertainty with Chris Giles of the FT, Johnny Runge of King’s College London, Marianthi Dunn (OSR’s lead author on GDP revisions), and Sumit Dey-Chowdrey of ONS.

It wasn’t just GDP where this was an issue. We also highlighted the communication of uncertainty for labour market statistics (see the section and requirement on “volatility” on page 4) and research and development statistics (see requirement 2).

So it comes up a lot in our work.

Why is it so difficult?

Well, first of all, as I argue in the journal article, it’s just hard. I think there may be an innate tendency for people to see a number and assume it has a certainty, an absolute rightness. And this in turn may be deeply embedded because of our experience of learning maths as children: we see a subject in which answers are definitively right or wrong.

And I also think this is exacerbated by an assumption that statistics are counts of fixed things – the economy, population, crime. It’s only when you spend time understanding the process of compiling official statistics that you realise that the numbers are often more complicated than a simple count. They are driven by assumptions, definitions and methodological choices.

This psychological anchoring on ‘hard’ numbers is exacerbated by the way statistics are used in public debate. Numbers are slotted into political arguments – perhaps weaponised, even – without nuance or context. The percentage increase in GDP. The unemployment rate. Crime going up.

These factors reinforce each other. They mean that people communicating official statistics have a big challenge. They must convey what they know about their statistics – that is, that they are best estimates, not certain numbers – to audiences that often are expecting single, fixed results.

In the face of these phenomena it can be tempting for statistics producers to fall back on generic wording. As we found when we did a review of communicating uncertainty, producers can often say generic things like “users should exercise caution”.

I do not underestimate the challenge facing producers. But is there a way to think about uncertainty that might help?

I think that it’s not always recognised that there are two distinct categories of uncertainty. My article puts forward two concepts:

  • Specific uncertainty: This type of uncertainty is bound up with the process of collecting and estimating that is at the heart of statistics. It involves the recognition that the estimate is only an approximation, and a range of other estimates could also be plausible. This is perhaps the more familiar notion of uncertainty that arises in surveys – that the survey result represents a central estimate around which there is a confidence interval (a short illustrative sketch follows this list).
  • Contextual uncertainty: This is less about the process of production, and more about the inferences that users may draw, for policy and other decision purposes, from statistics. Numbers may not mean what users think they mean; and this may lead them to place unwarranted weight on them for decision making purposes.
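
To make the notion of specific uncertainty concrete, here is a minimal sketch in Python. The figures are invented purely for illustration (they are not drawn from the article or from any official statistics), and the normal approximation it uses is a deliberate simplification; the point is only that a single survey figure is really a central estimate surrounded by a range of other plausible values.

```python
import math

def survey_proportion_ci(successes: int, sample_size: int, z: float = 1.96):
    """Return (estimate, lower, upper) for a survey proportion, using the
    normal approximation, as a simplified illustration of a confidence interval."""
    p = successes / sample_size
    se = math.sqrt(p * (1 - p) / sample_size)  # standard error of the proportion
    return p, p - z * se, p + z * se

# Invented example: 520 of 1,000 respondents report a characteristic.
estimate, lower, upper = survey_proportion_ci(520, 1_000)
print(f"Central estimate: {estimate:.1%}; 95% interval: {lower:.1%} to {upper:.1%}")
# -> Central estimate: 52.0%; 95% interval: 48.9% to 55.1%
```

Reporting the interval, not just the 52%, is the ‘specific’ part; how a reader then acts on that range is where contextual uncertainty begins.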

As an analogy, consider maps. To quote my article: “There is a difference between the map-maker’s perspective, and the map-reader’s perspective. The map maker may successfully communicate the conventions that go into the production of the map – what different icons mean, how contour lines are measured, and so on. That is useful. But it is quite distinct from the map reader’s desire to use the map to determine how to undertake a journey, which depends on their inferences about the best route, the paths to avoid, the steeper hills to navigate around.”

My simple proposition is that producers may struggle because they don’t separate these two concepts. And they often focus on the first – the technical communication of uncertainty – without paying sufficient attention to the second – how people may use the numbers.

Of course, it might be said that the standard boilerplate phrasing “users should treat the number with caution” is a nod towards contextual uncertainty. But it’s not a helpful one, and wherever it arises, producers should provide some more useful steers to the users.

So, to save you the trouble of reading the whole article, here’s a summary of what it says about communicating uncertainty:

  • It’s hard
  • It’s not always done well
  • It’s helpful to distinguish these two types: specific and contextual

And one thing is clear – generic wording like “users should employ caution” should be avoided. If my article and OSR’s overall body of recommendations achieves anything, I hope it consigns this generic wording to history.

Transparency: bringing the inside out

In our latest blog, Director General for Regulation, Ed Humpherson, discusses the divergence between internal positivity and external scepticism about analysis in Government, and how transparency is key to benefiting the public good…

Seen from within Government, these are positive times for analysis. There is an analysis function, headed by Sir Ian Diamond, which continues to support great, high-profile analytical work. There are strong professions, including economists, statisticians, operational researchers and social researchers, each with strong methods and clear professional standards. There is an Evaluation Task Force, which is doing great things to raise the profile of evaluation of policy. And data and analysis are emphasised by Ministers and civil service leaders like never before – exemplified by the 2023 One Big Thing training event focused on use of data and analysis in Government.

Yet the perspective from outside Government is quite different. The Public Administration and Constitutional Affairs Select Committee has been undertaking an inquiry into Transforming the UK’s Statistical Evidence Base. Several witnesses from outside Government who’ve given evidence, and some of the written evidence that has been provided, highlight concerns about the availability of analysis and how it’s used. In particular, witnesses questioned whether it’s clear what evidence sources inform policy decisions.

What explains this divergence between internal positivity and external scepticism?

In my view, and as I said in my own evidence before the Committee, it all comes down to transparency. By this I mean: the way in which analysis, undertaken by officials to inform Ministers, is made available to external users.

This is highly relevant to the Committee’s inquiry. A key question within the inquiry is the way in which external users can access analysis undertaken within Government.

These questions are very relevant to us in OSR. We have developed the principle of Intelligent Transparency. You can read more here, but in essence, Intelligent Transparency is about ensuring that, when Government makes statements using numbers to explain a policy and its implementation, it should make the underlying analysis available for all to see.

As I explained to the Committee, we make interventions when we see this principle not being upheld – for example, here and here. When we step in, departments always respond positively, and the analysts work with policy and communications colleagues to make the evidence available.

My basic proposition to the Committee was that the more Government can comply with this principle, the more the gap between the internal insight (there’s lots of good analysis) and the external perception (the analysis isn’t used or made available) will close. This commitment to transparency should be accompanied by openness – a willingness to answer questions raised by users, and a willingness to acknowledge the inherent limitations and uncertainties within a dataset.

In terms of what we do at OSR, I wouldn’t see any point, or value, in us going upstream to consider the quality of all the analysis that circulates within Government.

Our role is about public accessibility and public confidence – not about an internal quality assurance mechanism for economics, operational research, social research and other types of analysis undertaken in Government. We are not auditors of specific numbers (i.e. a particular figure from within a statistical series) – something we have to reiterate from time to time when a specific number becomes the focus of political debate. Nor do we have the resources or remit to do that. But we DO have both the capacity and the framework to support the appropriate, transparent release and communication of quantitative information.

This is the heartland of our work on statistics, and it’s completely applicable to, say, economic analysis of policy impacts, or evaluations of the impact of Government policy. There are good arrangements for the quality of economic analyses through the Government Economic Service (GES), and the quality of evaluations through the Evaluation Task Force (ETF); and similarly for the other disciplines that make up the Analysis Function. The ETF is a new kid on this particular block, and it is a great innovation, a new force for driving up the standards and openness of Government evaluations.

Where we add value is not in duplicating the GES, or ETF, or similar professional support structure within Government. Indeed, we already work in partnership with these sources of support and professional standards. Our expertise is in how this quantitative information is communicated in a way that can command public confidence.

In short, then, it really does come down to a question of transparency. As I said to the Committee, it’s like a garden in the early morning. Some of it is in the sunlight already, and some of it still in shade. Gradually, we are seeing more and more of the lawn come into the sunlight – as the reach of transparency grows to the benefit of the public.

The success and potential evolution of the 5 Safes model of data access

In our latest blog, Ed Humpherson, Director General for Regulation, discusses the 5 Safes model as a key feature to support data sharing and linkage…


In OSR’s data linkage report, we highlighted the key features of the data landscape that support data sharing and linkage. The 5 Safes model is one of those. Yet we also recommended that the 5 Safes model be reviewed. In this blog, I want to focus on one aspect of the model and set out the case for a subtle but important change.

The 5 Safes model is an approach to data use that has been adopted widely across the UK research community, and has also been used internationally. It is well-known and well-supported and has had a significant impact on data governance. It is, in short, a huge success story. (And for a short history, and really interesting analysis, see this journal article by Felix Ritchie and Elizabeth Green).

The 5 Safes are:

  • Safe data: data is treated to protect any confidentiality concerns.
  • Safe projects: research projects are approved by data owners for the public good.
  • Safe people: researchers are trained and authorised to use data safely.
  • Safe settings: a SecureLab environment prevents unauthorised use.
  • Safe outputs: outputs are screened and approved to ensure they are non-disclosive.

Any project that aims to use public sector administrative data for research purposes should be considered against the 5 Safes. The 5 Safes model therefore provides a criteria-based framework for giving assurance about the appropriateness of a particular project.

OSR’s recommendations relevant to the 5 Safes:

In July 2023, OSR published our report on data sharing and linkage in government. We had a range of findings. I won’t spell them out here, but in short, we found a good deal of progress across Government, but some remaining barriers to data sharing and linkage. We argued that these barriers must be addressed to ensure that the good progress is maintained.

We made two recommendations relevant to the 5 Safes:

  • Recommendation 3 (The Five Safes Framework): Since the Five Safes Framework was developed twenty years ago, new technologies to share and link data have been introduced and data linkage of increased complexity is occurring. As the Five Safes Framework is so widely used across data access platforms, we recommend that the UK Statistics Authority review the framework to consider whether there are any elements or supporting material that could be usefully updated.
  • Recommendation 10 (Broader use cases for data): To support re-use of data where appropriate, those creating data sharing agreements should consider whether restricting data access to a specific use case is essential or whether researchers could be allowed to explore other beneficial use cases, aiming to broaden the use case where possible.

We made the recommendation about reviewing the framework because a range of stakeholders mentioned to us the potential for updating the 5 Safes model, in the light of an environment of ever-increasing data availability and ever-more powerful data processing and analysis tools.

And we made the recommendation about broader use cases because this was raised with us as an area of potential improvement.

The use of 5 Safes in research projects

What brings the two recommendations together is the 5 Safes idea of “safe projects”. This aspect of the model requires research projects to be approved by data owners (essentially, the organisations that collect and process the data) for the public good.

For many research activities, this project focus is absolutely ideal. It can identify how a project serves the public good, what benefits it is aiming to bring, and any risks it may entail. It will require the researcher to set out the variables in the data they wish to explore, and the relationships between those variables they want to test.

For some types of research, however, the strictures of focusing on a specific project can be limiting. For example, a researcher who wants to establish a link between wealth and some aspects of health may not know in advance which of the variables in a wealth dataset, and which of the variables in a health dataset, they wish to examine. Using the “safe project” framing, they might have to set out specific variables, only to discover that they are not the most relevant for their research. And then they might have to go back to the drawing board, seeking “safe project” approval for a different set of variables.

Our tentative suggestion is that a small change in focus might resolve these problems. If the approval processes focused on safe programmes, this would allow approval of a broad area of research – health and wealth data sets – without the painstaking need to renew applications for different variables within those datasets.
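
To make the proposed shift more tangible, here is a rough and entirely hypothetical sketch in Python. The dataset names, variables and approval logic are invented for illustration, and a real approval process involves far more than a scope check; the sketch simply contrasts an approval tied to named variables with one scoped to a broader programme of research.

```python
from dataclasses import dataclass

# Hypothetical illustration only: names and logic are invented, and the other
# four Safes would still apply in full under either approach.

@dataclass
class ProjectApproval:
    approved_variables: set[str]          # scope fixed to named variables

    def permits(self, requested: set[str]) -> bool:
        # Anything outside the approved variable list means a fresh application.
        return requested <= self.approved_variables

@dataclass
class ProgrammeApproval:
    datasets: dict[str, set[str]]         # broad scope: approved datasets

    def permits(self, requested: set[str]) -> bool:
        # Any variable within the approved datasets stays in scope, so the
        # researcher can refine their variable choices without re-applying.
        available = set().union(*self.datasets.values())
        return requested <= available

wealth_and_health = {
    "wealth_survey": {"net_wealth", "property_wealth", "pension_wealth"},
    "health_records": {"self_reported_health", "hospital_admissions"},
}

project = ProjectApproval({"net_wealth", "self_reported_health"})
programme = ProgrammeApproval(wealth_and_health)

revised_request = {"pension_wealth", "hospital_admissions"}
print(project.permits(revised_request))    # False: back to the approvals process
print(programme.permits(revised_request))  # True: still within the programme
```

Under the programme framing, a revised choice of variables that stays within the approved datasets does not send the researcher back through the approvals process, which is exactly the friction described in the wealth-and-health example above.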

What I have set out here is, of course, very high level. It would need quite a lot of refinement.

Other expert views on the 5 Safes

Recognising this, I shared the idea with several people who’ve spent longer than me thinking about these issues. The points they made included:

  • Be careful about placing too much emphasis on the semantic difference between programmes and projects. What is a programme for one organisation or research group might be a project for another. More important is to establish clearly that broader research questions can be “safe”. Indeed, in the pandemic, projects on Covid analysis and on Local Spaces did go ahead with a broader-based question at their heart.
  • This approach could be enhanced if Data Owners and Controllers are proactive in setting out what they consider to be safe and unsafe uses of data. For example, they could publish any hard-line restrictions (“we won’t approve programmes unless they have the following criteria…”). Setting out hard lines might also help Data Owners and Controllers think about programmes of research rather than individual projects by focusing their attention on broader topics rather than specifics.
  • In addition, broadening the Safe Project criterion is not the only way to make it easier for researchers to develop their projects. Better metadata (which describe the characteristics of the data) and synthetic data (which create replicas of the dataset) can also help researchers clarify their research focus without needing to go through the approvals process. There have already been some innovations in this area – for example, the Secure Research Service developed an exploratory route that allows researchers to access data before putting in a full research proposal – although it’s not clear to me how widely this option is taken up.
  • Another expert pointed out the importance of organisations that hold data being clear about what’s available. The MoJ Data First programme provides a good example of what can be achieved in this space – if you go to the Ministry of Justice: Data First – GOV.UK (www.gov.uk) you can see the data available in the Datasets section, including detailed information about what is in the data.
  • Professor Felix Ritchie of the University of the West of England, who has written extensively about data governance and the 5 Safes, highlighted for me that he sees increasing “well-intentioned, but poorly thought-through” pressure to prescribe research as tightly as possible. His work for the ESRC Future Data Services project sees a shift away from micro-managed projects as highly beneficial – after all, under the current model “the time risk to a researcher of needing a project variation strongly incentivises them to maximise the data request”.

More broadly, the senior leaders who are driving the ONS’s Integrated Data Service pointed out that the 5 Safes should not be seen as separate minimum standards. To a large extent, they should be seen as a set of controls that work in combination – the image of a graphic equaliser to balance the sound quality in a sound system is often given. Any shift to Safe Programmes should be seen in this context – as part of a comprehensive approach to data governance.

Let us know your thoughts

In short, there seems to be scope for exploring this idea further. Indeed, when I floated this idea as part of my keynote speech at the ADR UK conference in November, I got – well, not quite a rapturous reception, but at least some positive feedback.

And even if it’s a small change, of just one word, it is nevertheless a significant step to amend such a well-known and effective framework. So I offer up this suggestion as a starter for debate, as opposed to a concrete proposal for consultation.

Let me know what you think by contacting DG.Regulation@Statistics.gov.uk.

Remember how far you’ve come, not just how far you want to go

In our latest blog, Director General Ed Humpherson reflects on what OSR has achieved and our plans for the future…

At OSR, we are constantly striving to improve. We recently published our Business Plan for 23/24, which outlines our ambitions to support the transformation of statistics, to support improved communication of statistics, and to build partnerships with other organisations that focus on the public value of statistics and data.

But as the saying goes, it’s important to acknowledge how far we’ve come, as well as where we want to be. So I’d like to begin by setting out what we do.

The Office for Statistics Regulation is the regulatory arm of the UK Statistics Authority. Our role is to promote and safeguard the production and publication of official statistics in the UK.

There are three elements to our role:

  • how statistics are produced – conducting reviews on the statistics produced by government bodies, including awarding the National Statistics designation;
  • how statistics are used – looking at how statistics are used in public debate; and
  • how statistics are valued – promoting the value of statistics.

We do all this underpinned by a coherent philosophy. For statistics to serve the public good, it is essential to recognise that they are not just numbers. They must be produced by trustworthy organisations. They must be high quality. And they must be valuable to users.

Our maturity model

 

Levels of regulatory maturity

When looking at where we are heading, we use a regulatory maturity model.

At the lower end, the regulator operates at a static level, checking the compliance of individual statistics. This would be worthwhile, but wouldn’t be at all responsive to emerging user concerns.

At the next level of maturity, the regulator becomes more dynamic, responding to emerging issues from users. But it would be piecemeal.

To reach the next level, the regulator should become systemic – thinking about how to foster and support a system of statistics that is responsive to users.

The highest level of the maturity model goes beyond the statistical system and recognises that, for statistics to serve the public good, there must be a whole culture of respect for evidence, data and analysis.

Where we are now

How mature are we, assessed against this model? We are certainly delivering our core regulatory programme – which shows we are meeting the basic level of maturity – doing a wide range of assessments and other compliance reports during the year.

We are also responsive to emerging user concerns – for example, about ONS’s excess deaths statistics; or the exam algorithms in 2020; or about the population estimates, a set of concerns that first arose around the city of Coventry.

But this is something we do only partially. In my view there is a way to go in becoming better at anticipating these sorts of user concerns, and more skilled at doing deep dives into specific issues that are raising questions.

We are also increasingly systemic – addressing the ability of the system to meet user needs more widely; for example through our state of the statistics system report and through our campaign on intelligent transparency. And some of this gets into the wider space of a public culture of data and evidence use, for example our work on statistical literacy. We really should develop this further: it’s hugely important.

What people tell us

We continually ask for feedback, and as part of the recent UKSA mid-term strategy review, users completed a survey including questions about OSR. Stakeholders told us that:

– we should continue to do deep dives, but only if they’re done properly. The recent Sturgis review of our work (link) shows where we can improve in this regard.

– we should continue to challenge poor practice.

– we should increase our visibility, and champion effective communication – things we need to do more of.

– we should build partnerships with other organisations.

These all point to us needing to focus at the upper end of the maturity range – to be systemic and outwards focused.

OSR: Work in progress

So what does this all mean in terms of our development as a regulator?

In short, OSR has come a long way. But we are not at the top level of maturity yet. There is a lot we need to improve on – and that’s the intention of our business plan.

We’re keen to hear your views about our work and priorities. Please get in contact with us at regulation@statistics.gov.uk to let us know your thoughts, or if you would like a wider discussion with us.

What 2023 means for OSR

In our latest blog, Director General Ed Humpherson reflects upon the past year and sets out OSR’s priorities for the coming year. We are inviting feedback on these in order to develop our final 2023/24 Business Plan. If you have views on what the key issues in statistics are, please email regulation@statistics.gov.uk.

As the days shorten and the end of the year looms, in OSR we start turning our attention to the next financial year, starting in April 2023: to what we will focus on, what we want to achieve, what our ambitions are.  

This is a reflective and deliberative process: we don’t finalise our business plan for the 2023-24 year until April itself. And it’s also a consultative process: we want to hear from stakeholders about where you think we should focus our attention. 

How we develop our priorities

There are a range of inputs into our thinking. We start from our five-year strategic plan, which sets out broadly where we want the statistical system to be by 2025, and how we will help the system get there.

We also draw on our annual state of the statistical system report (most recently published in July) for what it tells us about the positive things in statistics we want to nurture and the challenges we want to address. And we also take stock of what we’ve achieved over the last 12 months: whether, in effect, we’ve successfully delivered last year’s priorities.

But there are two further aspects to our business plan that are crucial. First, we can’t do everything. We are a small organisation, and necessarily we have to make prioritisation choices about where we can most effectively support the public good of statistics. And second, we can’t plan for everything. We have to respond to those areas that are salient in public debate, where statistics are helping or hindering public understanding. And we can’t always predict very well what those issues might be. To take an obvious example, our planning for 2020-21, taking place around this time three years ago, did not anticipate that we’d spend most of 2020-21 looking at statistics and data related to the pandemic. 

To help us make these choices, and to be better at anticipating what might be just over the horizon, we would like the input, advice and creativity of our stakeholders: of people who care about statistics and data, and who want to help us be as effective as we can be. 

Our 23/24 priorities

Today I am pleased to share with you our draft priorities for 2023/24. These are deliberately high level. We have not chosen our menu, selected our ingredients, and already got the cake ready for the oven. We want to leave lots of room for people outside OSR to make suggestions and raise questions. 

Our plan for next year has four high level priorities, all focused on different aspects of supporting change and transformation: 

  • Support and challenge producers to innovate, collaborate and build resilience 
  • Champion the effective communication of statistics to support society’s key information needs 
  • Continue to widen our reach beyond official statistics and official statistics producers 
  • Increase our capability as a regulator 

The keen observers among you might note that these are an evolution of last year’s priorities, rather than a wholesale change. We have had a good year, for sure; but, as always, we will strive to do more.

Please get in contact with us at regulation@statistics.gov.uk to let us know your thoughts and questions about these priorities, or if you would like a wider discussion with us.

Weights and measures: how consultations relate to OSR’s role

In our latest blog, Director General Ed Humpherson responds to concerns raised with OSR regarding the recent consultation by the Department for Business, Energy & Industrial Strategy on the Choice on units of measurements: marking and sales.

This blog was amended on 4 October 2022 to provide further clarity about question design

Since the start of the Government’s consultation on weights and measures, a lot of people have raised concerns about it with us at OSR. At the time of writing, we have received over 150 emails, and many others have tagged us on Twitter, regarding the Department for Business, Energy & Industrial Strategy’s consultation on the Choice on units of measurements: marking and sales, which closed on 26 August 2022.

Before considering the specific question that has been raised with us, let’s first set out some background to how consultations relate to our role.

Consultations, done well, are a vital tool for government in developing policy. They can provide specific and qualitative insight to complement evidence from other sources like official statistics. They also help Government to understand different perspectives, foresee possible consequences and gather expert advice on implementation.

Our remit focuses on statistics. We set the standards for how Government collects and uses statistics. Consultations are relevant to our work in two senses. First, consultations can often draw on statistics, for example to illustrate the scale of an issue a policy intends to address. Second, consultations can also create statistical evidence – for example, the number of people responding to a particular question. In both of these senses, consultations are relevant to our work at OSR.

Turning to this particular consultation, the aim was to identify how the Government can give more choice to businesses and consumers over the units of measurement they use for trade, while ensuring that measurement information remains accurate.

However, when the consultation was launched, many stakeholders raised concerns about the consultation questions, and in particular question 3a. The question was as follows:

If you had a choice, would you want to purchase items (i) in imperial units? or (ii) in imperial units alongside a metric equivalent.

There was no option to select an outcome without imperial units at all. People felt that this was a potentially biased way of collecting information on public views on changes to measurement.

Given the concerns raised with us, we approached BEIS as the Department conducting the consultation. We wanted to understand the reasons for this question. They advised us that UK law currently requires metric units to be used for all trade purposes, with only limited exceptions. The purpose of the consultation was to identify how they can give greater choice to businesses and consumers over the units of measurement they use to buy and sell products. BEIS did also say that respondents had multiple opportunities to give a range of views through the consultation, and that all responses would be carefully considered.

This explanation was helpful. But it still doesn’t make this a good question design, because it doesn’t offer a complete range of responses. For example, including a ‘none of the above’ option would have allowed respondents the opportunity to express a preference for a unit of measurement other than imperial or imperial with metric. In the absence of such an option, the design in effect makes this an overly leading question.

So, what happens next? We would be surprised to see the evidence drawn from this specific question being used in any kind of formal quantitative way. If the results of this poorly designed question were presented in statistical terms (x% of people said…etc), then this would represent the generation of a statistical claim from a consultation. And in our view it would be potentially quite misleading as a representation of levels of support for any proposed policy change.