Data in debate: The role of statistics in elections

In our latest blog, our Head of Casework and Director General set out the guidance and support available for navigating statistics during an election campaign, and our role in publicly highlighting cases where statistics and data are left unpublished or are presented in a misleading way.

Intelligent transparency is something we talk about a lot in OSR. It involves taking an open, clear, and accessible approach to the release and use of data and statistics by default. It’s something we care about deeply, as public confidence in publicly quoted statistics is best enabled when people can verify and understand what they hear.

Taking a transparent approach by default will be particularly important during the upcoming general election campaign, where statistics will likely play a role in informing decisions made by the electorate but opportunities for governments to publish new analysis will be restricted. This is because in the weeks leading up to an election, known as the pre-election period, the Cabinet Office and Devolved Administrations set rules which limit public statements or the publishing of new policies and outputs.

Official statistics are unique in this respect, as routine and pre-announced statistics can continue to be published during this time, in line with the Code of Practice for Statistics. However, given that the pre-election period ushers in public silence for most government department activity, the publication of new information should be by exception. Any public statements made during the pre-election period should refer only to statistics and data that are already in the public domain, so that the figures can be verified and the need to publish new figures is avoided.

Part of our role as a statistics regulator is to promote and safeguard the use of statistics in public debate. We do not act to inhibit or police debate, and we recognise that those campaigning will want to draw on a wide range of sources, including statistics, to make their case for political office. Nevertheless, we will publicly highlight cases where campaigning parties have made statements that draw on unpublished statistics and data, or that present statistics and data in a misleading way.

Our interventions policy guides how we make these interventions, but we recognise that election campaigns require particularly careful judgement about when to intervene. This is why we’ve published our Election 2024 webpage, which brings together our guidance and support on election campaigns. This includes new guidance on the use of statistics in a pre-election period for government departments which sets out our expectations for how they should handle cases where unpublished information is referred to unexpectedly.

Reacting to misuse is not our only tool. This election, we want to do more up front to help people navigate the various claims and figures thrown about during a campaign. This is why we are launching a series of explainers on topics likely to feature in the election campaign, covering what to look out for and the common mistakes in public statements that we have seen through our casework.

We are also working in partnership with other organisations and regulators whose vision is aligned with ours and who support the good use of evidence in public debate. Our hope is that as a collective, we can contribute to the effective functioning of the election campaign.

We are not an all-purpose fact-checking organisation, nor are we the regulator of all figures used in public statements. While we can’t step into every debate, we will take every step we can to ensure that the role of statistics in public debate is protected and that the electorate is not misled.

Anyone can raise a concern about the production or use of statistics with us. You can find out more about our remit and how to raise a concern with us by visiting our casework page.

 

What do young people think about statistics?

In our latest blog, Head of External Relations, Suzanne Halls, explores what young people think about statistics, following a chance encounter on a train…

It’s common to hear two claims: that young people are disengaged with policy; and that people of all ages are disengaged with statistics and data, and have low levels of statistical literacy. In OSR, we are sceptical about both claims – especially the claim that people are disengaged with statistics. We know this from our casework, which features a wide range of people raising questions about the use of statistics. And we know it from our wider observations about the social life of statistics, not least during the pandemic.

Sometimes, though, it’s good to substantiate general opinions with on-the-ground evidence. In this context, it’s a good idea for OSR to test our broadly positive take on statistics and data in society with what people actually say and do.

And this is exactly what happened to me earlier this year. I was on a train, and I overheard a group of young people talking about the importance of statistics and data. I got talking to one of them, called Gilbert, a student from Hertfordshire studying for his GCSEs.

I was struck by his enthusiasm for data and statistics, and I wanted a fuller read-out of what he thought. So I was delighted when he agreed to have a further chat with me about how he understands and uses numbers – and I’ve set out his responses to my questions below. They speak for themselves, I think, so I’ve set them out more or less as he said them.

Why are you interested in statistics?

I like statistics because they give a comprehensive view of problems and help you work out solutions and predictions.

How do you think statistics help us?

I think that statistics are very important in the modern world because they act as the backbone of the economy and government decisions. They are also the way most data is presented in professional settings.

What are the benefits of statistics for young people?

Good statistics have the benefit of letting us completely understand the world we are going into and help us work out ways to improve it through technology and engineering.

Do young people need more of a say on data collection and use?

Yes, I think that teenagers and children need to be taught more about what is being taken from them when they accept ‘cookies’. At the moment I feel companies can put anything in their terms of service and get away with it and I feel we need to regulate this more heavily.

What questions do you think official statistics should be asking young people?

I think that official statistics should be asking more questions about activity with technology and more in-depth questions about climate change. I feel that if surface-level questions are asked, there is less chance of the young person engaging.

How could statistics producers across government engage more with younger audiences?

Nowadays the younger generation interact more over social media like Snapchat or TikTok. This means that fewer young people are seeing conventional ads on TV. If statistics producers condensed the facts into short entertaining videos and put them on platforms such as TikTok, there is a high chance more young people would engage with them.

Do you think you are taught enough about statistics in schools?

No, I feel that we need to be taught more about statistics to be able to interact and understand the world that older generations are leaving us with, such as the way politics are run at the moment and more importantly how to try and stop or preferably reverse climate change.

How do you interact with data?

I don’t really have a favourite way of interacting with data; I prefer dashboards with multiple graphs but am not overly fussed.

Where do you go for statistical information?

At the moment I use The Hustle to find business and political related statistics, and sources such as the Guardian for world statistics. However, the problem is there aren’t that many sites for finding out statistics and lots of people don’t try and find them.

Thanks so much for sharing your views. What is your favourite statistical fact?

I am absolutely fascinated by the fact that the human eye blinks on average 4,220,000 times a year!


As I said, I think this speaks for itself; and although it’s only one example, it provides an inspiring rebuttal to the idea that young people are disengaged with statistics and data.

And this evidence is just a taster – we recently published a think piece and research report on the concept of statistical literacy and the importance of communicating effectively, which is well worth a read!

At the Office for Statistics Regulation we are interested in hearing views from everyone on statistics and how they are used. We encourage you to follow our Twitter, read our newsletter, visit our website and contact us with any thoughts or questions you might have.

An army of armchair epidemiologists

Statistics Regulator Emily Carless explores the work done to communicate data on Covid-19 publicly, from inside and outside the official statistics system, supporting an army of armchair epidemiologists.

In 2020, our director Ed Humpherson blogged about the growing phenomenon of the armchair epidemiologist. Well, during the pandemic I became an armchair epidemiologist too. Or maybe a sofa statistical story seeker, as I don’t have an armchair! Even though I lead our Children, Education and Skills domain rather than working on health statistics, I couldn’t help but pay close attention to the statistics and what they could tell me about the pandemic.

At the micro-level I was looking at the dashboards on a near daily basis to understand the risks to myself, my family, my friends and my colleagues. I was watching the numbers of cases and hospitalisations avidly and looking at the rates in the local areas of importance to me. I felt anxious when the area where my step-sister lives was one of the first to go the new darkest colour shortly before Christmas 2021, particularly as my dad and step-mum would be visiting there soon afterwards. Earlier in the pandemic, once we were allowed to meet up, my mum and I had used these numbers to inform when we felt comfortable going for a walk together and when we felt it was better to stay away for a while to reduce the risk of transmission. These statistics were informing real world decisions for us.

At a macro-level I was also very interested in the stories the statistics were telling about the pandemic at a population level. The graphs on the dashboards were doing a great job of telling high level stories, but I was also drawn to the wealth of additional analysis that was being produced by third parties on Twitter. My feed was full of amazing visualisations that were providing additional insight beyond that which the statistical teams in official statistics producer organisations had the resources to produce.

As we highlighted in our recent State of the Statistical System report, the COVID-19 dashboard has remained a source of good practice. The dashboard won our Statistical Excellence in Trustworthiness, Quality and Value Award 2022. The ability for others to easily download the data from the COVID-19 dashboard to produce visualisations and bring further insight has been a key strength. I wanted to write this blog to further highlight the benefits of making data available for this type of re-use. I think Clare Griffiths’ (lead for the UK COVID-19 dashboard) tweet back in February sums it up perfectly. In response to one of the third-party Twitter threads she said ‘Stonking use of dashboard data to add value. Shows what can be done by not trying to do everything ourselves but making open data available to everyone.’

Here are a couple of my favourite visualisations (reproduced with permission). 

Like Clare, I really like Colin Angus’ (@VictimOfMaths) tapestry by age. It shows the proportion of confirmed Covid-19 cases in England by age group and how that changed during the pandemic. I also liked the way the Twitter thread explained the stories within the data and that the code was made available for others.

I also liked Oliver Johnson’s (@BristOliver) case ratio (log scale) plots. Although the concept behind them may have been complex, they told you what was happening with cases and hospitalisations. The plots show the 7-day English case ratio by reporting date on a log scale, using horizontal lines to mark where the case ratio corresponded to a two- or four-week doubling or halving.
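For anyone curious about the mechanics, below is a minimal sketch in Python of how such a plot can be built. It uses synthetic data rather than real dashboard figures, and the series and styling are illustrative assumptions, not a reproduction of Oliver’s code. The key arithmetic: if cases double steadily over two weeks, the ratio of one week’s total to the previous week’s is 2^(7/14) ≈ 1.41; a four-week doubling gives 2^(7/28) ≈ 1.19, and halving times give the reciprocals.

```python
# A minimal, self-contained sketch of a "case ratio" plot.
# Synthetic data only - not a reproduction of @BristOliver's charts.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Synthetic daily case counts: exponential growth then decay, with noise.
rng = np.random.default_rng(42)
days = pd.date_range("2021-11-01", periods=120, freq="D")
trend = np.concatenate([np.exp(np.linspace(8, 11, 60)),   # growth phase
                        np.exp(np.linspace(11, 9, 60))])  # decay phase
cases = pd.Series(rng.poisson(trend), index=days)

# 7-day case ratio: trailing 7-day total divided by the previous 7-day total.
weekly = cases.rolling(7).sum()
ratio = weekly / weekly.shift(7)

fig, ax = plt.subplots()
ax.plot(ratio.index, ratio, label="7-day case ratio")
ax.set_yscale("log")

# Reference lines: a steady two- or four-week doubling (or halving)
# corresponds to a constant week-on-week ratio of 2**(7/14) or 2**(7/28).
for weeks, style in [(2, "--"), (4, ":")]:
    r = 2 ** (7 / (7 * weeks))
    ax.axhline(r, linestyle=style, color="grey",
               label=f"{weeks}-week doubling")
    ax.axhline(1 / r, linestyle=style, color="grey",
               label=f"{weeks}-week halving")

ax.axhline(1, color="black", linewidth=0.8)  # flat: cases neither up nor down
ax.set_ylabel("Cases this week / cases last week (log scale)")
ax.legend(fontsize="small")
plt.show()
```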

There was great work being done to communicate data on Covid-19 publicly from outside the official statistics system, supporting an army of armchair epidemiologists. This demonstrates, at its best, the changing statistical landscape of increased commentary around official statistics, which we referenced in the latest State of the Statistical System report. Much of this was made possible by the COVID-19 dashboard team making the data available to download in an open format through an API, with good explanations, and engaging on social media to form a community around those data. We hope that this approach can be replicated in other topic areas to maximise the use of data for the public good.
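To make the open-data point concrete, here is a small sketch of the kind of re-use the dashboard API enabled. It is written against the v1 API as publicly documented during the pandemic; the endpoint, filter syntax and metric name (newCasesByPublishDate) are taken from that documentation and may since have changed or been retired.

```python
# Pulling published case data from the COVID-19 dashboard API and
# plotting it yourself - the kind of re-use the blog describes.
import json

import matplotlib.pyplot as plt
import pandas as pd
import requests

ENDPOINT = "https://api.coronavirus.data.gov.uk/v1/data"

params = {
    # Restrict to England at nation level...
    "filters": "areaType=nation;areaName=england",
    # ...and request only the fields we want, renamed to friendly keys.
    "structure": json.dumps({
        "date": "date",
        "newCases": "newCasesByPublishDate",
    }),
}

response = requests.get(ENDPOINT, params=params, timeout=30)
response.raise_for_status()

df = pd.DataFrame(response.json()["data"])
df["date"] = pd.to_datetime(df["date"])
df = df.sort_values("date")

# Smooth day-of-week reporting artefacts with a 7-day rolling mean.
df["smoothed"] = df["newCases"].rolling(7).mean()

df.plot(x="date", y=["newCases", "smoothed"])
plt.ylabel("New cases by publish date (England)")
plt.show()
```

The same pattern – one GET request, a tidy data frame, a plot – is what put the dashboard’s data within reach of anyone who wanted to explore it.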

A model’s journey to Trustworthiness, Quality and Value

OSR’s Head of Data and Methods, Emily Barrington, explores the work involved in deploying artificial intelligence and statistical modelling within government while adhering to high regulatory standards.

Thinking back to when I joined the Office for Statistics Regulation (OSR) in late 2019 (just before the pandemic hit), Artificial Intelligence (AI) and other complex statistical modelling were still in their infancy within government. There were pockets of work being done here and there, and guidance was being produced, but there was nothing public facing, and nothing to help analysts understand how to organise model development in a way that could instil trust from the public’s perspective.

At this point you may be thinking, but aren’t you the regulator of statistics? Why are you thinking about AI models? Well, two reasons. Firstly, it comes down to definition. AI is the new buzzword but when you strip it back to its core components it’s really just complex statistical modelling (albeit on a larger scale and with bigger computers!) so any guidance that would apply to statistical modelling will also apply to AI and vice versa. Secondly, helping build public trust is in our ethos and, when it comes to AI use within government, the outputs of such models often have a public impact – be it directly or indirectly through policy change.

Not long after I joined I started looking at how we, at OSR, could have a voice in this area to champion best practice through our pillars of Trustworthiness, Quality and Value (TQV).

The pandemic effect

If anything, the need for data and insight throughout the pandemic helped break down some of the barriers that had been stopping AI and complex modelling taking off within government. Data sharing and public acceptance of data use were generally greater during the pandemic, which may have been driven by the need to help save lives. This drive, however, sometimes led to misjudgement, as happened with the awarding of exam grades in 2020, which led to our review on ‘Securing public confidence in algorithms’. This was the first time OSR had worked on anything related so specifically to algorithms, and the lessons drawn from the work resonated widely: people thought we had something to give – and we agree with them!

This work also made us think outside the box when it came to the Code of Practice for Statistics (the Code). After all, the model used to award exam results was not official statistics, nor was it AI for that matter, but the Code still helped us when making our judgements.

Back to championing best practice

By the time the review on awarding exam results was published, we had already started putting down some thoughts on how the Code could be applied when using models, and later that year our alpha version of ‘Guidance for Models: Trustworthiness, Quality and Value’ was published. We published it as an alpha because we wanted to get as much feedback as possible before promoting it more widely – this was our first time in this space, after all. We also felt there might be a better way to present the messages, but this needed further thought and input from the wider analysis and data science communities.

The pillars of Trustworthiness, Quality and Value (TQV)

Since the publication of the alpha guidance, we have come a long way in thinking about what the Code and its pillars really embody when broken down, and have matured our thinking on statistical modelling. Today we published the finalised version of ‘Guidance for Models: Trustworthiness, Quality and Value’, which takes the TQV messages and brings them to life for model planning and development. We have softened our focus on the Code principles since the alpha version and taken a step back to concentrate on the Code considerations that matter most for models serving the public good. This change came from feedback from the analytical and data science communities that the messages are stronger when not linked to the Code specifically. We have also incorporated the lessons from our review on ‘Securing public confidence in algorithms’ and our follow-up case study on QCOVID.

We now have guidance which we believe embodies what is needed to help build public confidence and trust when deploying statistical models. But I guess the proof is in the pudding…

Thoughts?

If you have any feedback, thoughts or use cases where you found our guidance helpful please do not hesitate to contact OSR Data and Methods – we’d love to hear from you!


Related links

Guidance for Models: Trustworthiness, Quality and Value

The role of official statistics in evaluation | Insight project

Insights into the use of official statistics in policy evaluation

Grace Pitkethly, Insights and Evaluation Manager at OSR, writes about how the use of official statistics can improve evaluation, an already essential tool for ensuring the effective functioning of government.

Evaluation is an essential tool to ensure the effective functioning of government. In the words of the Evaluation Task Force: “Government departments are expected to undertake comprehensive, robust and proportionate evaluation of their policy interventions in order to understand how government policies are working and to ensure the best value of public money”.  

In recent years there’s been good progress in setting up structures and providing guidance from the top down to help departments conduct good quality policy evaluations. We fully support this at OSR – our Director General, Ed Humpherson, has written about how good evaluation supports (and is supported by) the Code of Practice for Statistics.

At OSR, we also want to help from the bottom-up – enabling the people conducting evaluations to do this as effectively as possible using statistics and data that serve the public good. Supporting policy evaluation at different levels, whether cross-government, departmental, or team-level, helps enable efficient and good quality evaluations. One way that we can do this is supporting the use of official statistics in policy evaluations, focusing on the value of the statistics to support society’s need for information from evaluations. NAO guidance based on the Magenta Book says that existing data from administrative and monitoring systems, or large-scale, long-term surveys should be considered first as data sources. But is that actually the case? 

We carried out a quick exploration of how official statistics and their underlying datasets are currently used in evaluations, and how OSR can support statistics producers to make their statistics more valuable for evaluations. Although time and resource constraints limited our scope, through our conversations we heard about a variety of monitoring and evaluation programmes which draw on official statistics.

Crucially, we did not identify any evaluations which rely significantly on published official statistics alone. This doesn’t mean that examples don’t exist – but none were raised in our conversations with five OSR regulators, individuals in eight policy departments involved in carrying out or enabling evaluation, and five other teams across the ONS and Cabinet Office.

We found that the most common way official statistics are used in evaluation is through the analysis and linkage of data which underpin official statistics. In some cases, the data which underpin official statistics are linked to, or analysed alongside, primary data collections designed specifically for the evaluation. This is to overcome barriers such as data gaps (where official statistics are not produced for all outcomes of interest) and granularity (where official statistics do not break down into the geography or group of interest). One example is DLUHC’s Supporting Families programme evaluation. This linked together existing data sources from multiple departments (many of which feed into official statistics) and Local Authority data, with additional primary data collection. 

However, our conversations highlighted other potential barriers to linking data in this way, such as the inability to access data held by other departments securely, difficulty cultivating relationships within and across departments to get buy-in, and data matching issues arising from a lack of harmonisation. These barriers exist not only at departmental level but also for the individuals conducting or involved in evaluations. This shows the importance of combining top-down cross-government evaluation guidance with a bottom-up approach, starting directly with the people producing and using the data, to create the right conditions for successful evaluations.

Although these are high-level findings, they highlight key questions that OSR can explore to support the use of official statistics in evaluation:  

  • Do statisticians consider key policy questions and the data needs of evaluation when developing statistical outputs? And do OSR regulators support them to do this? 
  • Are the data suitable for linking to other datasets? 
  • Is there effective analytical leadership in place to support finding, accessing and sharing official statistics for evaluation purposes? 

These are just some of the areas that we will explore arising from this work. What’s certain is that evaluation is only growing in importance and visibility across government and OSR can play a role in its success.

How OSR secures change

Why do organisations do the things they do? Strip away all the language of business plans and objectives and strategies, and what it often boils down to is wanting to achieve some kind of positive impact.

It’s important to remember that as we launch our latest business plan for 2022/23. In this blog, rather than highlight specific outputs and priorities, I want to talk more generally about how OSR, as a regulator, secures positive change.

By change, we mean regulatory work or interventions that ensure or enhance statistics serving the public good. There are basically two ways in which our work leads to statistics serving the public good. Our work can:

  • secure a positive change in the way statistics are produced and/or presented; and/or
  • make a direct contribution to public confidence and understanding.

OSR clearly does secure impact. In response to our reports, producers of statistics make changes and improvements to their statistics and other data. Statistics producers also use the presence of OSR in internal debates as a way of arguing for (or against) changes – so that OSR casts a protective shadow around analytical work. OSR can also secure changes to how Ministers and others present data. And OSR also achieves impact through getting departments to publish previously unavailable data. In all these ways, then, OSR secures impact, in the sense of statistics serving the public good to a greater extent.

In terms of formal statutory powers, our main lever is the statutory power to confer the National Statistics designation, which in effect is a way of signalling regulatory judgement. The regulatory role is to assess: to review and form a judgement and, on the basis of that judgement, to award the National Statistics designation. We have recently been reviewing the National Statistics designation itself.

A further power in the Statistics and Registration Service Act is the power to report our opinion. Under section 8 of the Act, we are expected to monitor the production and publication of official statistics and report any concerns about the quality of any official statistics, good practice in relation to any official statistics, or the comprehensiveness of any official statistics.

These statutory powers do not create influence and drive change by themselves. We need to be effective in how we wield them. We have to supplement them with a powerful vision, good judgement, and effective communication.

The power of ideas

The most significant source of influence and impact is the power of the ideas that underpin the Code of Practice for Statistics. The Code is built on the idea that statistics should command public confidence. It is not enough for them to be good numbers: collected well, appropriately calculated. They must have attributes of trustworthiness, quality and value.

The power of these ideas comes from two sources. First, they are coherent, and in the Code of Practice, are broken down into a series of increasingly granular components – so the ideas are easy for producers to engage with and implement. Second, they have enormous normative power – in other words, trustworthiness, quality and value represent norms that both statisticians and senior staff want to be seen to adhere to, and that wider users want to see upheld.

These powerful, compelling ideas represent, then, something that people want to buy into and participate in. A huge amount of OSR’s impact happens when OSR is not even directly involved – by the day-to-day work of statisticians seeking to live up to these ideas and the vision of public good that they embody.

Judgements

OSR’s work begins with the ideas embodied in the Code, which we advocate energetically, including through our crucial policy and standards function. The core work for OSR actually consists of making judgements about trustworthiness, quality and value, in multiple ways:

  • our assessments of individual sets of statistics, where we review statistics, form judgements, and present those judgements and associated requirements to producers and then publicly – either through in-depth assessment reports, or by quicker-turnaround reviews of compliance;
  • our systemic reviews, which address issues that cut across broader groups of statistics, and which often focus on how statistics provide public value, including highlighting gaps in meeting user needs;
  • our casework, where we make judgements about the role statistics play in public debate – whether there are issues with how they are used, or how they have been produced, which impact on public debate; and
  • through our broader influencing work, including our policy work, and our research and insight work streams.

These judgements are crucial. Our ability to move fluidly using different combinations of our regulatory tools is important to securing impact. It allows us to follow up where the most material change is required and extend our pressure – and support – for change.

We are able to make judgements primarily through the capability of OSR’s people. Their capability is strong, and we depend on their insight, analysis, judgement and ability to manage a range of external relationships.

Communication and reach

It is not enough for us to make good judgements. We need to make sure that any actions are implemented – in effect, that our judgements radiate out into the world and lead to change.

There are three main channels for achieving this reach:

  • Relationships with producers: our relations with producers are crucial. Heads of Profession and the lead statisticians on individual outputs are the key people for making improvements; their buy-in is crucial.
  • Voice and visibility: having a public voice magnifies our impact. It ensures that policymakers are aware of what we do and understand that our interventions can generate media impact.
  • Wider partnerships: while our direct relationships with producers and our public voice can create sufficient leverage for change, we also draw on wider partnerships. For example, credible external bodies like the RSS and Full Fact can endorse and promote our messages – so that producers face a coalition of actors, including OSR, that are pushing for change.

And we put a lot of emphasis on the views, experiences and perspectives of users of statistics. Almost all our work involves engaging with users, finding out what they think, and seeking to ensure producers focus on their needs.

In that spirit, we’d be very keen to get reactions on our own business plan – from all types of users of statistics, and also from statistics producers.

Conclusion 

Business plans should not simply be a list of tasks. It is also important to be clear on how an organisation delivers, how individual projects and priorities help achieve a positive impact. In OSR’s case, achieving this impact involves the power of ideas, good judgement and effective reach.

With this clarity around impact, our business plan (and work programme) comes to life: more than just a set of projects, it’s a statement of ambition, a statement of change.

But the business plan is also not set in stone. We are flexible and willing to adapt to emerging issues. So if there are other areas where we should focus, or other ways we can make a positive difference, we’d really welcome your feedback.

Exploring the value of statistics for the public

In OSR (Office for Statistics Regulation), we have a vision that statistics should serve the public good. This vision cannot be achieved without understanding how the public view and use statistics, and how they feel about the organisations that produce them. One piece of evidence that helps us know whether our vision is being fulfilled is the Public Confidence in Official Statistics (PCOS) survey.

The PCOS survey, which is conducted independently on behalf of the UK Statistics Authority, is designed to capture public attitudes towards official statistics. It explores trust in official statistics in Britain, including how these statistics are produced and used, and it offers useful insights into whether the public value official statistics.

When assessing the value of statistics in OSR, two of the key factors we consider are relevance to users and accessibility. The findings from PCOS 2021, which have been published this week, give much cause for celebration on these measures, while also raising important questions to explore further in our Research Programme on the Public Good of Statistics.

Do official statistics offer relevant insights?

PCOS 2021 shows that more people are using statistics from ONS (Office for National Statistics) now compared to the last publication (PCOS 2018). In the 2018 publication of the PCOS, 24% of respondents able to express a view said they had used ONS statistics, but this has now increased to 36%. This increase may be due to more people directly accessing statistics to answer questions they have about the COVID-19 pandemic. In our Research Programme, we are interested in knowing more about this pattern of results and also understanding why most people are not directly accessing ONS statistics.

Are official statistics accessible?

PCOS 2021 asked respondents if they think official statistics are easy to find, and if they think official statistics are easy to understand. These questions were designed to capture how accessible official statistics are perceived to be by members of the public. Most respondents able to express a view (64%) agreed they are easy to find. This is an important finding because statistics should be equally available to all, without barriers to access. Most respondents able to express a view (67%) also agreed that statistics were easy to understand, suggesting that two thirds of respondents feel they can understand the statistics they want to.

However, respondents who were aged 65 or older were least likely to agree with these two statements. Statistics serving the public good means the widest possible usage of statistics, so this is an important finding to explore further to ensure that older respondents are able to engage with statistics they are interested in. In our Research Programme, we will work to identify what barriers might be causing this effect and whether there are other groups who feel the same way too.

The value of statistics

Considering how the value of statistics can be upheld, respondents in PCOS 2021 were asked to what extent they agree with the statement “it is important for there to be a body such as the UK Statistics Authority to speak out against the misuse of statistics”. The majority (96%) of respondents able to express a view agreed with this statement, with a similar number (94%) agreeing that it is important to have a body who can ensure that official statistics are produced free from political interference. While we are cautious about putting too much weight on these two questions in the survey, these findings may at the very least indicate the public value the independent production of statistics, as well as challenges to the misuse of statistics.

In conclusion, PCOS 2021 suggests that statistics are relevant and accessible to many members of the public, but there are still some who do not access statistics or consider them easy to find or easy to understand. While the findings of PCOS 2021 offer a wealth of important information, and demonstrate the value of official statistics, it is clear there are still a lot of questions to explore in our Research Programme. We will continue our work to understand what statistics serving the public good means in practice, guided by knowledge from PCOS 2021.

Acting against the misuse of statistics is an international challenge

Acting against the misuse of statistics is an international challenge

That was the message of the event on 14 March, hosted by the UK Statistics Authority, on the prevention of the misuse of statistics, which formed part of a wider campaign to celebrate the 30th anniversary of the UN Fundamental Principles of Official Statistics.

My fellow panellists, Dominik Rozkrut and Steven Vale, and I discussed a range of topics: from statistical literacy to best practice in regulation, and from memorable examples of misuse to the cultural differences that affect public trust internationally. Although we all had different experiences and approaches, it was clear that we shared a passion for truth and for statistics that serve the public good.

The event had all the merits of the online meetings we’ve all become familiar with: lots of people able to join from a wide range of locations, and lots of opportunities for people to engage using live chat functions.

Perhaps some people find it less intimidating to type a question into an app than to raise their hand in a crowded room, because there were lots of interesting questions asked, and it was clear that the issue of preventing the misuse of statistics generated a lot of interest and passion from the audience as well as the panellists.

But the event also brought a new kind of frustration for me as a speaker: there were too many questions to answer in the time available, and I felt bad that we couldn’t answer all the questions that people typed in.

So, in an attempt to rectify that, I’ve decided to use this blog to address the questions that were directly for me that I didn’t answer in real time, and those which touched on the key themes that came across during the Q&A.


“Who are the best allies in preventing misuse or building statistical literacy outside of stats offices? Are there any surprising allies?”

There are obvious allies for us, like Full Fact and the Royal Statistical Society.

I also like to give a shout out to the work of Sense about Science. Their work highlights that there is a huge amount of interest in evidence, data and statistics – and that a simple model of experts versus “the public” is far too simplistic.

There are a huge range of people who engage with evidence: teachers, community groups, people who represent patients, and a lot of others. These people, who want to find out the best evidence for their community, are fantastic allies.

And I’d also pick out a surprising ally: politicians. In our experience, politicians are almost always motivated to get it right, not to misuse statistics, and they understand why we make the interventions we do. So perhaps they are the ally that would most surprise people who look at our work.

“How important is statistical literacy among the media and general public in helping prevent the misuse of statistics?”

I think critical thinking skills are important here. People should feel confident in the statistics that are published, but also feel confident that they know where to find the answers to any questions they have about them.

But equally, we need statistical producers to be better in how they communicate things like uncertainty, in a way that is meaningful for the public.

So rather than putting the responsibility of understanding solely on the user, and just talking about statistical literacy, let’s also talk about producers’ understanding – or literacy if you will – about public communication.

“You have mentioned that sometimes the stats get misinterpreted because of the way they are presented – can you share some examples?”

My point here was that misinterpretation is a consequence of what producers of statistics do. One example we’ve seen frequently during the pandemic concerns data on the impact of vaccines. It’s been the case that sometimes people have picked out individual numbers produced by public health bodies and highlighted them to argue their case about vaccines. Producers need to be alive to this risk and be more willing to caveat or re-present data to avoid this kind of misinterpretation.

“What are your views on framing statistics for example 5% mortality rate vs 95% survival rate? Both are correct but could be interpreted very differently.”

I find it impossible to answer this question without context, sorry! I definitely wouldn’t say that, as an absolute rule, one is right and the other is wrong. It depends on the question the statistician is seeking to inform. I can’t be more specific than that in this instance.

However, to avoid possible misinterpretation, we always recommend that producers use simple presentation, with clear communication about what the numbers do and do not say.

“How do we balance freedom of expression with the need to prevent the abuse and misuse of statistics?”

We don’t ban or prohibit people from using statistics, so in that sense there’s no barrier to freedom of expression. But we do want to protect the appropriate interpretation of statistics – so our interventions are always focused on clarifying what the statistics do and don’t say, and asking others to recognise and respect this. It’s certainly not about constraining anyone’s ability to express themselves.

“What’s the most damaging example of misuse of statistics that you’ve come across in your career?”

Here I don’t want to give a single example, but rather a type of misuse which really frustrates us. It’s when a single figure is used as a piece of number theatre, but the underlying dataset from which that figure is drawn is not available, so it’s not possible for the public to understand what sits behind the apparently impressive number. It happens a lot, and we are running a campaign, which we call Intelligent Transparency, to address it.

“Can you give us some more insight into how you steer clear of politics, media frenzies, and personalities?”

We always seek to make our intervention about clarifying the statistics, not about the arguments or policy debates that the statistics relate to. So we step in and say, “this is what the statistics actually say” and then we step out. And we don’t tour the news studios trying to get a big name for ourselves. It’s not our job to get media attention. We want the attention to be on what the statistics actually say.


I hope these answers are helpful, and add some context to the work we do to challenge the misuse of statistics. I also hope everyone reading this is going to follow the next series of events on the UN Fundamental Principles of Official Statistics.

The next round of events is moving on from preventing misuse, to focusing on the importance of using appropriate sources for statistics. Find out more about them on the UNECE website.

Time for a change: amending the Code of Practice to allow alternative release times for official statistics

On 5 May 2022, we’ll be making our first change to our Code of Practice for Statistics since its major overhaul in 2018. We’ll be changing two practices relating to the release times of official statistics, giving producers the opportunity to release statistics at times other than 9:30am, where they and we judge that this clearly serves the public good. These changes are not being undertaken lightly: this blog explains how we reached our decision and what the change will mean for producers and statistics in the future.

The coronavirus pandemic underscored the importance of ensuring statistics are shared in ways that reassure users about their integrity and independence: a big part of this is ensuring equality of access, so that statistics are available to everyone at the same time. Since its inception, our Code has stated that official statistics should be released to all users at 9:30am on a weekday (practice T3.6). However, during the pandemic the Director General for Regulation granted several exemptions for release at times other than 9:30am – to key economic statistics and to some COVID-19 related statistics.

These decisions led us to review two release practices in the Orderly Release principle of the Code of Practice for Statistics. As part of this, we ran a public consultation between September and December 2021 to consider whether greater flexibility in release arrangements should be formalised within the Code of Practice for Statistics. Today we have published the findings from our consultation, which have helped inform our decision to revise these two practices. 

For most official statistics, producers, supported by several other stakeholders, told us they believe a standard release time of 9:30am best serves the public good: it enables release in an open and transparent manner, gives consistency across the GSS, and protects against political interference. However, producers also said that there are some situations in which an alternative release time can better serve the public good, and they would like to be able to nominate an alternative time where they see evidence of the benefit. For example, this could be where users can be better reached by timing a statistical release so that it can be promoted through the media. Enabling statisticians to speak directly to journalists and the public about their statistics has been a powerful means of ensuring the statistics are well used during the pandemic.

Considering producer views and our consultation findings, we are amending practice T3.6 so that an alternative release time for statistics can be requested where the Head of Profession for Statistics believes that it will enable the statistics to better serve the public good. To ensure that producers remain orderly and transparent about release arrangements and can maintain statistical independence, this request will still need to be approved by the Director General for Regulation. All approvals will be listed in our new alternative release times table. The alternative time is to be used consistently by the producer for those statistics and should be announced in advance.

We have outlined our expectations of statistics producers seeking alternative release times in our explainer of the public good test. We highlight that considering the public good means understanding the wider nature of the use of the statistics and how the public access statistics – whether direct from producers, via the media or other means – and how the release of statistics supports both debate and decision making. Essentially, it means that producers should consider the way their statistics inform and benefit society. Being open about key decisions around release arrangements is central to maintaining the confidence of users in official statistics. To this end we are asking the Office for National Statistics to engage with its users and other stakeholders, to explain its release approach for several key economic statistics that it already publishes at 7am, and how it is balancing competing demands. 

As ever for OSR, a fundamental question driving this review has been, “how can we best serve the public good?” Giving producers the opportunity to decide on alternative release times, based on their own assessment of how their statistics best serve the public good, helps us meet this ambition, which is central to both our vision as a regulator and to the Government Statistical Service’s 2020-2025 strategy. 

Please see our new policy on alternative release times for further information. You can also read more about the findings from the consultation. The changes to the Code will come into effect from 5 May 2022. If you have any questions, please contact us via regulation.support@statistics.gov.uk.

Guest blog: Improving reporting and reducing misuse of ethnicity statistics

Richard Laux, Deputy Director, Data and Analysis, at the Equality Hub discusses his team’s work in improving reporting and reducing the misuse of ethnicity statistics in our latest guest blog, as part of the 30th anniversary of the United Nations’ Fundamental Principles of Official Statistics.

In my role as the Head of Analysis for the Cabinet Office’s Equality Hub I am in the privileged position of leading the team that analyses disparities in outcomes between different ethnic groups in the UK.

The reasons for disparities between ethnic groups are complex, and include factors such as history, relative levels of deprivation, the different age profile of some ethnic groups as well as many other factors. Despite the complexity of the issues, my team and I do all we can to prevent misuse of the data and help ensure that robust and clearly explained data are furthering the debate on race and ethnicity, which is an emotive topic for many people in this country.

My team’s responsibility for this is firmly rooted in UN Principle 4, on preventing the misuse of statistics. We do this in a number of ways that align with this principle.

One way we do this is through bringing several analyses together to paint a broad-based picture of a topic of interest. For example, when supporting the Minister of State for Equalities on her reports on progress to address COVID-19 health inequalities we synthesised a large body of research describing the impact of the pandemic on ethnic minority groups. Much of this work involved my team reconciling and reporting on different sources and drawing robust conclusions from different analyses that didn’t always entirely agree.

A second way we try to prevent misuse of data is through the clear presentation of statistics, an example being Ethnicity facts and figures. This website was launched in October 2017 and since then it has been a vital resource to inform the debate about ethnicity in the UK. It gathers together government data about the different experiences of the UK’s ethnic groups and is built around well-established principles, standards and practices for working with data like the Code of Practice for Statistics.

We try to make the content on the website clear and meaningful for people who are not experts in statistics and data. It also contains detailed background information about how each item of data was collected and analysed to help those users with more interest or expertise in statistics draw appropriate conclusions.

The Commission on Race and Ethnic Disparities report recommended that the Race Disparity Unit (RDU) lead work to further improve both the understanding of ethnicity data and the responsible reporting of it (thereby helping to prevent its misuse). As part of this work, we will consult on how to improve the Ethnicity facts and figures website, including whether to increase the amount of analysis on the site to help users better understand disparities between ethnic groups. Some of this might be in a similar vein to Office for National Statistics (ONS) work during the pandemic on ethnic contrasts in deaths involving COVID-19. This modelling work showed that location, measures of disadvantage, occupation, living arrangements, pre-existing health conditions and vaccination status accounted for a large proportion of the excess rate of death involving COVID-19 in most ethnic minority groups.

Of course, there can be some difficulties with data that might lead to its misuse: datasets can vary greatly in size, consistency and quality. There are many different ways that ethnicity is classified in the datasets on Ethnicity facts and figures, and these classifications can differ widely depending on how and when the data were collected. For example, people might erroneously compare the outcomes for an ethnic group over time thinking the group has remained the same whereas in fact its definition has changed. This might happen if someone is looking at data for the Chinese, Asian or Other groups over a long time period: in England and Wales, the Chinese group was combined into the ‘Other’ ethnic group in the 2001 version of the aggregated ethnic groups, but into the Asian group in the 2011 version.

We also try to minimise misuse and misinterpretation by promoting the use of established concepts and methods, including information on the quality of ethnicity data. Our quality improvement plan and our significant contribution to the ONS implementation plan in response to the Inclusive Data Taskforce set out our ambitions for improving the quality of ethnicity data across government. We will also be taking forward the Commission on Race and Ethnic Disparities’ recommendation that RDU should work with the ONS and the OSR to develop and publish a set of ethnicity data standards to improve the quality of reporting on ethnicity data. We will consult on these standards later this year.

Finally, we raise awareness and knowledge of ethnicity data issues through our ongoing series of published Methods and Quality Reports and blogs. For example, one of these reports described how the overall relative stop and search disparity between black people and white people in England and Wales can be misleading if geographical differences are not taken into account.

We have significant and ambitious programmes of analysis and data quality work outlined for the future. I would be grateful for any views on how we might further help our users in interpreting ethnicity data and preventing misuse.