Renewed momentum? The Statistics Assembly one year on

One year ago today, the UK’s inaugural Statistics Assembly took place in London. The word ‘world-leading’ can be overused. But it is not out of place to describe an exercise in user engagement on an unprecedented scale.

The Assembly was an inspiring event. It had several hundred attendees, with many more online. It was a crucial step towards the UK statistical system becoming more open to its users. This spirit was exemplified by the speakers: they came from across the UK and internationally, and none of the main speakers was from the Office for National Statistics (ONS).

The statistics system can often be criticised for a lack of transparency in how it sets priorities. Users can sometimes feel that consultation is more about producers broadcasting what they are doing rather than listening. In this context, the openness of the Assembly felt significant, perhaps almost revolutionary.

One year on, what can we say about the way that the Assembly has influenced statistics in the UK?

In short, I am worried that this process needs a renewed injection of momentum.

The Assembly: a step change in user engagement

The Assembly was one of the key recommendations made by the review of the Statistics Authority undertaken by Denise Lievesley in March 2024. Denise heard the concerns from users of statistics about a lack of meaningful engagement. She saw the Assembly as a mechanism for addressing this weakness.

The UK Statistics Authority (UKSA), in partnership with the Royal Statistical Society, organised the Assembly swiftly and effectively. It took place 10 months after the Lievesley review was published. It was energising, creative and open. And there was a clear follow-up: in March, the National Statistician’s Expert User Advisory Council, chaired by Professor David Hand, distilled the extensive material generated during the Assembly into a clear report setting out four priorities:

  • Reinvigorate sustained and effective user engagement.
  • Ensure user needs for more-granular statistics are met (including small areas, urban/rural, sub-groups of society, under-represented groups and so on).
  • Commit to a significant scaling up in the use of administrative data.
  • Recognise the need for UK-wide statistics and advocate for, and support, harmonised data where desirable.

So by March 2025, things looked good. The event had happened, it was a success, and a set of priorities had been identified.

A summer of change – and some progress

And then events took a different turn, with the publication of two reviews looking at long-standing ONS problems. First, in April 2025, we in the Office for Statistics Regulation (OSR) published a review of the ONS’s economic statistics, which highlighted deep-seated quality concerns. Then, in June, the Devereux review identified weaknesses in the ONS’s leadership, prioritisation and delivery. Since then there have been significant changes in the leadership of the ONS.

Over the last few months, under this new leadership, the ONS has been focusing on delivering an ambitious recovery plan, responding to OSR’s recommendation that it publish quarterly progress updates. It has done so with a commendable focus on openness, and with enhanced engagement with users of economic statistics.

The ONS is of course just one part of the UK statistical system, and it should not fall to the ONS alone to take forward the Assembly’s priorities. And there have been some important steps, by both the ONS and other statistics producers, to implement the Assembly’s recommendations, as set out on the UKSA website in December. Examples of progress include the publication of detailed information on the use of administrative data in the last census; the publication of a new approach to data sources; and a commitment to establish an online ‘trust centre’.

At OSR we have sought to progress the Assembly’s priorities. In particular, the new version of the Code of Practice for Statistics is more user-centric. In our day-to-day work, we challenge and support statistics producers to do better on user engagement. We have published a public involvement and engagement toolkit, which encourages producers to consider a much wider range of people when they undertake user engagement activities. Our push on intelligent transparency requires government departments to be open and proactive in making data available publicly, and our review of cross-UK comparability in June 2025 made systemic recommendations in line with the Assembly’s comparability theme.

The need to maintain momentum

There are lots of demands on the statistics system. The Assembly is just one of those, and it’s clear that all statistics producers are facing significant resource constraints.

But it’s hard to say that, one year on, the progress on the four priorities has been significant. This is recognised in the December update on the Authority website, which says that “We have not progressed development of the refreshed Authority user engagement strategy as quickly as we would have liked.”

Yet the Assembly’s priorities remain a powerful anchor for engagement with users of statistics, for two reasons:

Firstly, a lot of people committed time and effort to making the Assembly a success, in the expectation that it represented a substantial reset in how user engagement is thought about and delivered across the UK statistical system. It is important to realise the benefits of this commitment.

Secondly, there is a question of who statistics are for. In the course of 2025, there were two alternative versions of an answer to that question. The Assembly proposed the answer that statistics are for a broad, vibrant, engaged community of users. The Devereux review implied that the users who really count are the key institutions of the state drawing on economic statistics: HM Treasury, the Bank of England, the Office for Budget Responsibility – at least, for now.

Of course, both answers are correct. Statistics serve the institutions of the state and also a much broader range of users across society. An effective system holds these two sets of users in broad balance – recognising that statistics are for decision makers, but that ‘decision maker’ covers a very wide range of organisations and individuals in society.

Momentum regained?

The key point of this blog is simple. The Assembly represented a breakthrough in the way the statistics system opened itself up to its users. It should not be regarded as a one-off, but as an ongoing process whose momentum must be maintained.

Moreover, the process for appointing a new National Statistician is underway. The new National Statistician, leading across the entire UK statistical system, can inject renewed vigour into taking forward the Assembly’s recommendations.

One year on, my view is this:

Is there a risk of a loss of momentum? Yes.

Can it be regained? Absolutely.

Going beyond consultation to creative conversation about the Code of Practice

In this blog, Penny Babb, Head of Policy and Standards at the Office for Statistics Regulation, discusses her experience of refreshing the Code of Practice for Statistics.

As we’ve recently released the third edition of the Code of Practice for Statistics, I am keen to reflect on the experiences that have brought us to this point. I would also like to acknowledge the support of the Royal Statistical Society, and the many statistics producers and stakeholders who have shared their ideas and views about the Code – thank you! Your contributions have challenged our thinking and enriched Code 3.0.

The long and winding road

In October 2023, we kicked off a review of the Code of Practice and asked the question: is it time to refresh the Code? Following a wide range of engagement activities, by February 2024 the answer was clear: Code 2.1 had served us well – it was solid, trusted and respected. But in the light of the shifting data landscape, technological advancements and mis/disinformation challenges, it was time to think about how the Code could evolve to meet these challenges. So, we prepared a draft Code 3.0 and invited feedback through a formal consultation, which ran from October 2024 to February 2025. And here we are now, in November 2025, with the latest edition, after further refinement of the draft to address the feedback we received.

Ongoing dialogue

Looking back to two years ago, when the Code 3.0 project kicked off, I’m amazed at how far we have come. We have had so many fruitful opportunities to hear from stakeholders across the statistical system and wider communities to inform our thinking. Our understanding about the Code has evolved through our regulatory work and through hearing about others’ experience and perspectives. In fact, some changes we’ve made to the Code are the product of engagement we’ve undertaken since the second edition was published in February 2018.

One example of how our thinking has been shaped by those using the Code is the development of the Code Principles. These largely stem from the work of the ONS sustainable development goal (SDG) team. In 2019 the team needed a way to test non-official data sources for use in monitoring some indicators. Their work led us to develop a set of universal principles relevant to any analyst. In turn, Code 3.0 builds on these ideas, unpacking Trustworthiness, Quality and Value (TQV) into 10 principles that any analyst can apply.

Active engagement

Engagement is not a purpose but a means to establish purpose. Active engagement was core to the development of Code 3.0. ‘You said / we did’ is an approach often used in response to consultations. We used it ourselves after the Code consultation, to summarise what we heard and to give a feel for how we planned to act. But as a way of reporting engagement, it misses the nuance and depth of the exchange of insight.

By focusing on engagement through listening and responding, we have been able to establish an interactive and iterative process rather than a one-off sharing of ideas. We saw feedback not just as a series of edits to be made but as many insightful points to be carefully considered within task and finish groups in OSR.

Open to check and challenge

An important element of establishing any dialogue is the exchange of understanding and respect. Listening is critical for this to occur, as is a degree of empathy. But for the dialogue to be successful, it requires all parties to be open to hearing from each other. Central to that is seeing yourself as accountable for your decisions and actions and being courageous enough to invite criticism as you determine how best to meet your responsibilities.

We have embedded an accountability framework within our Code 3.0 package, developed after input from David Caplan, who has been an important stakeholder throughout our Code development. David worked for 20 years in the Government Statistical Service (GSS), was director of research and analytics at the Audit Commission, and, we are happy to hear, is the incoming Honorary Officer for Public Statistics at the RSS. At a Code/RSS event, David shared his thinking about an accountability model based on his experiences at the Audit Commission. This inspired us to take our framework forward.

Our accountability framework highlights that you must be willing to hear critical feedback: to give an account, be held to account and make good. Accountability relies on you first seeing yourself as accountable to others – it is an attitude. Once you have accepted this frame of mind, you can then find ways to make yourself accountable.

I love the simplicity of the accountability framework, although I recognise that following its principles is perhaps easier said than done. Unless you are willing to hear what others have to say, to reflect on it and then to set out what you are doing in response – whether you can or can’t, agree or disagree – you will be working in a bubble, isolated from hearing about ways you may need to evolve and grow. This kind of approach isn’t something new in OSR – we always aim to work to the standards we demand of others. In fact, thinking about and enhancing our own TQV will feature more prominently in our upcoming new three-year OSR strategy.

Call to action (and listening)

So, as you check out Code 3.0, see how it can focus your thinking on what matters in your own context. Make yourself accountable to your colleagues and stakeholders – invite and listen to feedback, be open to check and challenge, and be frank about your decisions and the information that’s informed them.




Trust in Statistics: Launching the Refreshed Code of Practice

OSR asked the CEO of the Royal Statistical Society (RSS), Dr Sarah Cumbers, to reflect on the refreshed Code of Practice in a guest blog. The RSS works closely with OSR to support its regulatory work, including partnering with us on the TQV (Trustworthiness, Quality and Value) annual award for voluntary application, which celebrates organisations adopting these principles in their daily practice for everyone to see.

The Code is the essential foundation for our statistical system, underpinning the public trust society relies on by clearly communicating what matters when working with data, and driving up standards.

The RSS has been closely involved in the Code’s review process. We were pleased to host two dedicated roundtables with the OSR, one in November 2023 and another in late 2024, to give our members an opportunity to share their views and engage with OSR on how the Code could evolve.

It’s genuinely encouraging to see how many of the issues raised in those sessions are reflected in the revised Code, including the need for stronger user engagement. This direct line from member input to policy change shows the strength and influence of the statistical community when we speak together. We also submitted a formal response to the consultation, underlining our call for users to be placed at the heart of decision-making.

The Code is much more than a set of standards on paper; it’s a guide to best practice that supports better decision-making across the UK.

Independence is a central theme. In a period when the relationship between statisticians and politicians is under scrutiny globally, the Code’s emphasis on protecting impartial professional judgement is crucial. The integrity of official statistics depends on statisticians being able to work freely and professionally, and the Code provides a vital safeguard for us in the UK.

There’s always more work to do to ensure statistics fully serve the public good, and the RSS is particularly keen to see further progress on user engagement. The revised Code’s call to put users at the centre of the system provides a valuable catalyst for this, and we are looking forward to working with OSR, ONS and the wider government statistical service to discuss and agree exactly what that should mean in practice across the statistical system.

The Code matters because, above all, trust in statistics matters. The refreshed Code is a great step forward, and the RSS looks forward to supporting its use and continuing this conversation with both the OSR and our members.



Quality Data, Shared Purpose: World Statistics Day 2025 and the Refreshed Code of Practice

In our latest guest blog, Rochelle Tractenberg explores how ethical statistics and the refreshed Code of Practice can help build public trust this World Statistics Day…

Every five years, World Statistics Day celebrates the global contributions of statistics and data science to evidence-informed decisions, democratic accountability, human dignity and flourishing, and sustainable development.

The theme of this World Statistics Day 2025 is “quality statistics and data for everyone”, which coincides with the Office for Statistics Regulation (OSR)’s refreshed Code of Practice for Statistics. The timing and orientation of the refreshed Code both highlight its place in promoting ethical statistics practice to help achieve this goal for the people of the UK.

While high-quality and widely available statistics and data certainly exist and are to be celebrated, these don’t just happen; they require diligence, care and competence at all levels by professionals in statistics and data science. World Statistics Day 2025 presents an opportunity to consider how we can increase the visibility of this commitment and work, and their accessibility and utility for everyone.

An under-appreciated challenge to public trust in official statistics and the statistical profession is “drift” – changes in the properties of data over time. Drift can occur in a data source, in its associated meaning, and in its ability to accurately represent a concept. It can mean that data become less reliable or less consistent over time, reducing their value and trustworthiness. As such, regularly reviewing statistics to see whether they meet relevant standards – and, critically, identifying when they do not – is crucial to promoting, and in some cases renewing, trust.

I was excited to hear about the refreshed Code of Practice for Statistics – the guiding framework for all official statistics in the UK – which will be live on the Code website from 3 November 2025.

OSR has updated the Code to include broader support for anyone working with or communicating statistics, and to reflect technological advances. The Code’s core principles – Trustworthiness, Quality and Value – have not changed, but its guidelines have been made clearer and more relevant. All official statistics in the UK must meet the requirements of the Code to ensure that they serve the public good. Those that do so are granted accredited official statistics status, indicating that the statistics, and the underlying data, are of high quality. The Code is also useful for those working with data and statistics who want to voluntarily apply it as a practical framework to increase public confidence in statistical work. It can help anyone build public trust in statistics and data science.

Engagement with the Code is worthwhile at the start of, and regularly throughout, data collection and analysis, for anyone who wishes to more actively and transparently step onto the path towards ethical statistical practice. To see my full reflections on building trust in statistics and data science through ethical practices, and how the Code can contribute, please see my recent article. For additional perspectives on the importance of World Statistics Day, please see the International Statistics Institute’s statement.


Rochelle E. Tractenberg is a tenured professor at Georgetown University (Washington, DC). Her applied ethics work focuses on strengthening trustworthiness in statistics and data science across research and policy settings. A biostatistician since 1997, she serves on the UK National Statistician’s Data Ethics Advisory Committee (NSDEC), the ISI Advisory Board on Ethics, and the Association for Computing Machinery Committee on Professional Ethics. She has written two books on ethical practice, and has contributed to standards for statistics, data science, and mathematics – as well as the forthcoming UN guide, Ethics in Official Statistics: Foundations and practices to foster public trust.


Related:

Quality statistics and data for everyone: Renewing trust through ethical practice

Welcoming the new Evaluation Registry

At OSR, we support good evidence that informs the public.

Our main focus is on official statistics. But we also recognise the role that evaluation plays in providing insight into what works in government policies and programmes.

That’s why we welcome the brand new Evaluation Registry, launched by the Evaluation Task Force in March 2025. The site will provide a single home for evaluations across Government.

There is at present a lot of great evaluation work taking place across Government, led by researchers, economists and other analysts. These evaluations are commissioned by departments to look at what’s being done, and to create an evidence base that helps refine, improve and challenge policy.

The issue with this, though, is twofold. First, it can be difficult to know where to find and access evaluation evidence. That in itself is a huge pity. It means that the good evaluation that gets done can sometimes languish in obscurity, and the knowledge it represents may not be accessible to a wide range of people. This inaccessibility is also not in line with the intelligent transparency that we advocate in OSR – it can help underpin public confidence if the evidence that informs decisions is made fully available.

The second issue is that there is scope to increase the coverage of evaluations across Government. In 2019, the Prime Minister’s Implementation Unit reported that only 8% of the Government Major Projects Portfolio had robust evaluation plans in place (here’s a link to the report). This has now increased to 34% in the 2023/24 GMPP portfolio (here’s a link to the report); however, there is still considerable work to be done to improve the quality and quantity of evaluation on the Government’s most complex and strategically significant projects. By making the process and practice of evaluation more transparent, the website will drive greater commissioning and take-up of evaluations.

As the Evaluation Task Force’s blog published today says, the Evaluation Registry brings together evaluation plans and reports in a single, accessible site – which, as of June 2025, already contains over 1,750 entries!

We have a strong partnership with the Evaluation Task Force and will work with them to support high-quality evidence. In particular, while the Evaluation Task Force will maintain and oversee the Registry, the Office for Statistics Regulation will engage with Departments where there are delays in publishing evaluations. In this way, we will support transparency and access to the knowledge base provided by evaluations. Our partnership will also support Departments to use the Evaluation Registry, and thereby provide maximum value to the public.

As my blog from March 2022 states, we love evaluation in OSR. So we’re delighted to be able to support the advent of the Registry.

Related links: The Evaluation Registry: a new home for Government evaluation


The Code, The Key and (for fans of 90s dance music) The Secret

In our latest guest blog, Paul Matthews, Head of Profession for Statistics in the Scottish Government, responsible for the capability and capacity of the statistics profession, talks about his passion for improvement and how the system can make the statistics it produces better and more impactful. This blog coincides with the closing of our consultation on proposed changes to the Code of Practice for Statistics; we plan to present our findings in the coming months.

I hear a lot of myths about the Code of Practice for Statistics. I hear things like:

  • ‘I know [insert topic here] is very relevant at the moment, but we haven’t preannounced so we can’t publish for at least 4 weeks, because that’s what the Code says’, or
  • ‘We will have issues with the Code of Practice and trustworthiness if we break a time series’, or
  • ‘We need to publish this as a management information release because the Code won’t allow us to publish as official statistics due to quality’.

In these examples, we are thinking of the Code as telling us what we can’t do. I’m not sure why that is. Maybe we tend to think of it as the rule book that we must obey. Maybe it’s because having the ‘rules’ is comforting for us as statistics producers and can give defence if we are challenged.

A key, not a lock

Rather than seeing the Code as telling us what we can’t do, I see it as an enabler to tell us what we can. In other words, it is a key that facilitates the practical release of statistics that provide value for society rather than a lock that prevents us from being responsive and innovative. And this is equally true for the existing version of the Code of Practice and the draft Code 3.0.

Thinking of the Code as a key isn’t carte blanche for us to do whatever we want. There are still risks we need to work through. But in my experience, the Code tends to be supportive of sensible pragmatic things for users that help build trust and transparency rather than being about protocol for protocol’s sake.

Using the Code as a key

I spent a lot of time looking at the Code of Practice when I developed statistical strategic priorities for the Scottish Government Statistics Group. The priorities are about how we can improve statistical work to focus on what provides the greatest value in producing statistics for the public good. It means that there are things we will need to deprioritise given our finite resources.

Lots in this is informed by the enabling nature of the Code. For example:

  • Using user engagement to help inform what users want, what we can discontinue or deprioritise, and being transparent with analysis plans to convey what we’re doing.
  • Greater clarity and impact of communications to enable publications to be focused and streamlined.
  • Greater use of data sources where timeliness trades off against accuracy, or greater use of granular-level data where appropriate to provide useful new analysis that is fit for purpose for users’ needs.

We have had great support and advocacy for what we’re trying to do in Scotland from everyone in OSR, and it gives us confidence that how we’re innovating is in line with how the Code was designed. As Ed Humpherson said in his response to us on the priorities:

“We support your approach and there are several features that we regard as best practice, including the identification and communication of priorities for each analytical area; the involvement of users; and the openness about the potential for suspensions or changes to some of your current outputs… we certainly would not want to require you to keep all of your current range of statistical outputs if they were no longer aligning with user need”.

When Code 3.0 is finalised, all statistics producers should read it carefully and use it as a key to enable opportunities in the statistics they produce.

That’s the secret, after all!

Lessons in communicating uncertainty from the Infected Blood Inquiry: What to say when statistics don’t have the answers

In this guest blog, Professor Sir David Spiegelhalter, Emeritus Professor of Statistics at the University of Cambridge, reflects on his experiences in the Infected Blood Inquiry and the importance of transparency around statistical uncertainty.

In my latest book, The Art of Uncertainty, I discuss the UK Infected Blood Inquiry as a case study in communicating statistical uncertainty. In the 1970s and 1980s, tens of thousands of people who received contaminated blood products contracted diseases including HIV/AIDS and hepatitis. Many died as a result. This crisis, with its catastrophic consequences, was referred to as ‘the worst treatment disaster in the history of our NHS’.

The Infected Blood Inquiry was set up in 2018 after much campaigning by victims and their families. I was involved in the Statistics Expert Group established as part of the Inquiry.

Building a model for complex calculations

Our group was tasked with answering a number of questions surrounding the events, such as how many people had been infected with hepatitis C through contaminated blood transfusions.

Some conclusions were relatively easily reached. We could be reasonably confident in the data and its verification: for example, that around 1,250 people with bleeding disorders were diagnosed with HIV from 1979 onwards.

Other figures proved much more difficult to estimate, such as the number of people receiving ordinary blood transfusions who were infected with hepatitis C before testing became available. We needed a more sophisticated approach that did not involve counting specific (anonymous) individuals but looked at the process as a whole. Consequently, we established a complex statistical model to derive various estimates. However, due to the lack of data available for some parts of the model, expert judgement was at times needed to complete it, so we had to account for multiple sources of uncertainty.

Using this model, we were able to produce numbers that went some way to answering the questions we were charged with. However, some figures came with very large uncertainty due to the inherent complexity involved in their calculation, so we could not be reliably sure of their accuracy.

A scale for communicating uncertainty

To prevent people from placing undue trust in our findings, we wanted to express the considerable caution that should be taken when considering our analysis. For this, we found the scale used in scientific advice during the COVID-19 pandemic to be a helpful model, in which confidence is expressed on a scale from low to high.

This scale was liberating; it allowed us to clearly convey our level of confidence in a way that accurately reflected the reality of the numbers. So, we could say that we only had moderate confidence that the available data could answer some of the questions we had been asked. And for others – for example, how many people had been infected with hepatitis B – we refused to provide any numbers, on account of having low confidence in being able to answer the question.

Lessons for the statistical community about communicating uncertainty

It can be difficult to admit to substantial uncertainty in data when dealing with a tragedy such as this. In the case of the Infected Blood Inquiry, that uncertainty meant that the victims and their families could not get precise answers to various questions for which they deserved some kind of closure.

It is also undeniably important, however, that those producing statistics are open about how confident they are in their numbers, so that people understand when statistics can reliably answer their questions, and when they cannot. Indeed, being transparent about any uncertainty in published data is one of the principles that the Office for Statistics Regulation (OSR) promotes in its intelligent transparency campaign and its championing of analytical leadership to support public understanding of, and confidence in, the use of numbers by government.

Intelligent transparency demands that statistical claims and statements are based on data to which everyone has equal access, are clearly and transparently defined, and for which there is appropriate acknowledgement of any uncertainties and relevant context. This concept helps us understand how to communicate our findings when we are asked to answer questions regardless of the quality of available evidence. And it acknowledges that publishing numbers without appropriate context, clarifications and warnings is counterproductive to providing real public value.

So, when it comes to communicating statistics to the public, honesty – or transparency, as we call it here – really is the best policy. I am delighted to see OSR placing more emphasis on intelligent transparency, and how statistics are communicated more generally, in its proposals for a refreshed Code of Practice. Ed Humpherson has also written an excellent blog on why communicating uncertainty is a constant challenge for statisticians.

Data in debate: The role of statistics in elections

In our latest blog, our Head of Casework and Director General set out the guidance and support available for navigating statistics during an election campaign, and our role in publicly highlighting cases where statistics and data are not published, or are presented in a misleading way.

Intelligent transparency is something we talk about a lot in OSR. It involves taking an open, clear, and accessible approach to the release and use of data and statistics by default. It’s something we care about deeply, as public confidence in publicly quoted statistics is best enabled when people can verify and understand what they hear.

Taking a transparent approach by default will be particularly important during the upcoming general election campaign, where statistics will likely play a role in informing decisions made by the electorate but opportunities for governments to publish new analysis will be restricted. This is because in the weeks leading up to an election, known as the pre-election period, the Cabinet Office and Devolved Administrations set rules which limit public statements or the publishing of new policies and outputs.

Official statistics are unique in this respect as routine and preannounced statistics can continue to be published during this time, in line with the Code of Practice for Statistics. However, given that the pre-election period ushers in a period of public silence for most government department activity, the publication of new information should be by exception. Any public statements made during the pre-election period should only refer to statistics and data that are already in the public domain, to ensure that the figures can be verified and to avoid the need to publish new figures.

Part of our role as a statistics regulator is to promote and safeguard the use of statistics in public debate. We do not act to inhibit or police debate, and we recognise that those campaigning will want to draw on a wide range of sources, including statistics, to make their case for political office. Nevertheless, we will publicly highlight cases where campaigning parties have made statements that draw on statistics and data that are not published, or that are presented in a misleading way.

Our interventions policy guides how we make these interventions, but we recognise that election campaigns require particularly careful judgement about when to intervene. This is why we’ve published our Election 2024 webpage, which brings together our guidance and support on election campaigns. This includes new guidance on the use of statistics in a pre-election period for government departments which sets out our expectations for how they should handle cases where unpublished information is referred to unexpectedly.

Reacting to misuse is not our only tool. This election, we want to do more up front to help people navigate through the various claims and figures thrown about during an election. This is why we are launching a series of explainers on key topics that will cover what to look out for and the common mistakes in public statements that we have seen through our casework across topics which are likely to feature in an election campaign.

We are also working in partnership with other organisations and regulators whose vision is aligned with ours and who support the good use of evidence in public debate. Our hope is that as a collective, we can contribute to the effective functioning of the election campaign.

We are not an all-purpose fact-checking organisation, nor are we the regulator of all figures used in public statements. However, while we can’t step into every debate, we will take all the necessary steps we can to ensure that the role of statistics in public debate is protected and that the electorate is not misled.

Anyone can raise a concern about the production or use of statistics with us. You can find out more about our remit and how to raise a concern with us by visiting our casework page.

 

What do young people think about statistics?

In our latest blog, Head of External Relations, Suzanne Halls, explores what young people think about statistics, following a chance encounter on a train…

It’s common to hear two claims: that young people are disengaged with policy; and that people of all ages are disengaged with statistics and data, and have low levels of statistical literacy. In OSR, we are sceptical about both claims – especially the claim that people are disengaged with statistics. We know this from our casework, which features a wide range of people raising questions about the use of statistics. And we know it from our wider observations about the social life of statistics, not least during the pandemic.

Sometimes, though, it’s good to substantiate general opinions with on-the-ground evidence. In this context, it’s a good idea for OSR to test our broadly positive take on statistics and data in society with what people actually say and do.

And this is exactly what happened to me earlier this year. I was on a train, and I overheard a group of young people talking about the importance of statistics and data. I got talking to one of them, Gilbert, a student from Hertfordshire studying for his GCSEs.

I was struck by his enthusiasm for data and statistics, and I wanted a fuller picture of what he thought. So I was delighted when he agreed to have a further chat with me about how he understands and uses numbers – and I’ve set out his responses to my questions below. They speak for themselves, I think, so I’ve set them out more or less as he said them.

Why are you interested in statistics?

I like statistics because they give a comprehensive view of problems and help you work out solutions and predictions.

How do you think statistics help us?

I think that statistics are very important in the modern world because they act as the backbone of the economy and government decisions. They are also the way most data is presented in professional settings.

What are the benefits of statistics for young people?

Good statistics have the benefit of letting us completely understand the world we are going into and help us work out ways to improve it through technology and engineering.

Do young people need more of a say on data collection and use?

Yes, I think that teenagers and children need to be taught more about what is being taken from them when they accept ‘cookies’. At the moment I feel companies can put anything in their terms of service and get away with it and I feel we need to regulate this more heavily.

What questions do you think official statistics should be asking young people?

I think that official statistics should be asking more questions about activity with technology and more in-depth questions about climate change, I feel that if surface level questions are asked there is less chance of the young person engaging.

How could statistics producers across government engage more with younger audiences?

Nowadays the younger generation interact more over social media like Snapchat or Tik Tok. This means that less young people are seeing conventional ads on TV. If statistics producers condensed the facts into short entertaining videos and put them on platforms such as Tik Tok there is a high chance more young people would engage with them.

Do you think you are taught enough about statistics in schools?

No, I feel that we need to be taught more about statistics to be able to interact and understand the world that older generations are leaving us with, such as the way politics are run at the moment and more importantly how to try and stop or preferably reverse climate change.

How do you interact with data?

I don’t really have a favourite way of interacting with data, I prefer dashboards with multiple graphs but am not overly fussed.

Where do you go for statistical information?

At the moment I use the Hustle to find business and political related statistics and sources such as the Guardian for world statistics. However, the problem is there aren’t that many sites for finding out statistics and lots of people don’t try and find them.

Thanks so much for sharing your views. What is your favourite statistical fact?

I am absolutely fascinated by the fact that the human eye blinks on average 4,220,000 times a year!


As I said, I think this speaks for itself; and although it’s only one example, it is an inspiring rebuttal of the idea that young people are disengaged with statistics and data.

And this evidence is just a taster – we recently published a think piece and research report on the concept of statistical literacy and the importance of communicating effectively, which is well worth a read!

At the Office for Statistics Regulation we are interested in hearing everyone’s views on statistics and how they are used. We encourage you to follow us on Twitter, read our newsletter, visit our website and contact us with any thoughts or questions you might have.

An army of armchair epidemiologists

Statistics Regulator, Emily Carless explores the work done to communicate data on Covid-19 publicly, from inside and outside the official statistics system, supporting an army of armchair epidemiologists. 

In 2020, our director Ed Humpherson blogged about the growing phenomenon of the armchair epidemiologist. Well, during the pandemic I became an armchair epidemiologist too. Or maybe a sofa statistical story seeker, as I don’t have an armchair! Even though I lead our Children, Education and Skills domain rather than working on health statistics, I couldn’t help but pay close attention to the statistics and what they could tell me about the pandemic.

At the micro-level I was looking at the dashboards on a near daily basis to understand the risks to myself, my family, my friends and my colleagues. I was watching the numbers of cases and hospitalisations avidly and looking at the rates in the local areas of importance to me. I felt anxious when the area where my step-sister lives was one of the first to go the new darkest colour shortly before Christmas 2021, particularly as my dad and step-mum would be visiting there soon afterwards. Earlier in the pandemic, once we were allowed to meet up, my mum and I had used these numbers to inform when we felt comfortable going for a walk together and when we felt it was better to stay away for a while to reduce the risk of transmission. These statistics were informing real world decisions for us.

At a macro-level I was also very interested in the stories the statistics were telling about the pandemic at a population level. The graphs on the dashboards were doing a great job of telling high-level stories, but I was also drawn to the wealth of additional analysis being produced by third parties on Twitter. My feed was full of amazing visualisations that provided insight beyond what the statistical teams in official statistics producer organisations had the resources to produce.

As we highlighted in our recent State of the Statistical System report, the COVID-19 dashboard has remained a source of good practice. The dashboard won our Statistical Excellence in Trustworthiness, Quality and Value Award 2022. The ability for others to easily download the data from the COVID-19 dashboard to produce visualisations and bring further insight has been a key strength. I wanted to write this blog to further highlight the benefits of making data available for this type of re-use. I think the tweet back in February from Clare Griffiths (lead for the UK COVID-19 dashboard) sums it up perfectly. In response to one of the third-party Twitter threads she said ‘Stonking use of dashboard data to add value. Shows what can be done by not trying to do everything ourselves but making open data available to everyone.’

Here are a couple of my favourite visualisations (reproduced with permission). 

Like Clare, I really like Colin Angus’ (@VictimOfMaths) tapestry by age. It shows the proportion of confirmed Covid-19 cases in England by age group and how that changed during the pandemic. I also liked the way the twitter thread explained the stories within the data and that they made the code available for others. 
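The idea behind a tapestry chart of this kind can be sketched in a few lines: take raw case counts per age band for a time period and turn them into each band's share of that period's total. This is my own illustration of the concept, not Colin Angus's published code; the age bands and counts below are made up.

```python
# Minimal sketch of the "tapestry" calculation: convert case counts per
# age band into proportions of that period's total. Illustrative only --
# the real analysis used the open data from the COVID-19 dashboard.

def proportions_by_age(counts):
    """counts: dict mapping age band -> case count for one time period.
    Returns a dict mapping age band -> share of that period's total."""
    total = sum(counts.values())
    return {band: n / total for band, n in counts.items()}

# Hypothetical week of data.
week = {"0-19": 300, "20-39": 450, "40-59": 150, "60+": 100}
shares = proportions_by_age(week)
print(shares["20-39"])  # 0.45
```

Plotting these shares for every week, stacked by age band, gives the woven appearance that earned the chart its name.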

I also liked Oliver Johnson’s (@BristOliver) case ratio (log scale) plots. Although the concept behind them may have been complex, they told you clearly what was happening with cases and hospitalisations. The plot shows the 7-day English case ratio by reporting date on a log scale, using horizontal lines to show where the case ratio corresponded to a two- or four-week doubling or halving.
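The quantity behind these plots can be reconstructed in a few lines. This is a sketch of the concept as described above, not Oliver Johnson's actual code: the function name and the synthetic series are mine, and the reference lines follow from simple exponential-growth arithmetic.

```python
# Sketch of a 7-day case ratio: total cases in the most recent 7 days
# divided by the total in the preceding 7 days. On a log scale, steady
# exponential growth appears as a horizontal line.

def case_ratio(daily_cases):
    """Ratio of the last 7 days' cases to the previous 7 days' cases."""
    if len(daily_cases) < 14:
        raise ValueError("need at least 14 days of data")
    return sum(daily_cases[-7:]) / sum(daily_cases[-14:-7])

# Reference lines: doubling every 14 days implies a week-on-week ratio of
# 2 ** (7 / 14); doubling every 28 days implies 2 ** (7 / 28). The
# corresponding halvings are the reciprocals of these values.
TWO_WEEK_DOUBLING = 2 ** (7 / 14)    # ~1.41
FOUR_WEEK_DOUBLING = 2 ** (7 / 28)   # ~1.19

# Synthetic series doubling every 14 days: its case ratio sits exactly
# on the two-week doubling reference line.
series = [100 * 2 ** (day / 14) for day in range(28)]
print(round(case_ratio(series), 2))  # 1.41
```

Using a 7-day window on both sides of the ratio smooths out the strong day-of-week reporting pattern in the raw counts.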

There was great work being done to communicate data on Covid-19 publicly from outside the official statistics system, supporting an army of armchair epidemiologists. This demonstrates, at its best, the changing statistical landscape of increased commentary around official statistics, which we referenced in the latest State of the Statistical System report. Much of this was made possible by the Covid-19 dashboard team making the data available to download in an open format through an API, providing good explanations, and engaging on social media to form a community around those data. We hope that this approach can be replicated in other topic areas to maximise the use of data for the public good.