The Code, The Key and (for fans of 90s dance music) The Secret

In our latest guest blog, Paul Matthews, Head of Profession for Statistics in Scottish Government, responsible for the capability and capacity of the statistics profession, talks about his passion for improvement and how the statistics system can produce better statistics with more impact. This blog coincides with the closing of our consultation on proposed changes to the Code of Practice for Statistics; we plan to present our findings in the coming months.

I hear a lot of myths about the Code of Practice for Statistics. I hear things like:

  • ‘I know [insert topic here] is very relevant at the moment, but we haven’t pre-announced, so we can’t publish for at least 4 weeks, because that’s what the Code says’, or
  • ‘We will have issues with the Code of Practice and trustworthiness if we break a time series’, or
  • ‘We need to publish this as a management information release because the Code won’t allow us to publish as official statistics due to quality’.

In these examples, we are thinking of the Code as telling us what we can’t do. I’m not sure why that is. Maybe we tend to think of it as a rule book that we must obey. Maybe it’s because having the ‘rules’ is comforting for us as statistics producers and can give us a defence if we are challenged.

A key, not a lock

Rather than seeing the Code as telling us what we can’t do, I see it as an enabler to tell us what we can. In other words, it is a key that facilitates the practical release of statistics that provide value for society rather than a lock that prevents us from being responsive and innovative. And this is equally true for the existing version of the Code of Practice and the draft Code 3.0.

Thinking of the Code as a key isn’t carte blanche for us to do whatever we want. There are still risks we need to work through. But in my experience, the Code tends to be supportive of sensible pragmatic things for users that help build trust and transparency rather than being about protocol for protocol’s sake.

Using the Code as a key

I spent a lot of time looking at the Code of Practice when I developed statistical strategic priorities for the Scottish Government Statistics Group. The priorities are about how we can improve statistical work to focus on what provides the greatest value in producing statistics for the public good. It means that there are things we will need to deprioritise given our finite resources.

Much of this is informed by the enabling nature of the Code. For example:

  • Engaging with users to understand what they want and what we can discontinue or deprioritise, and being transparent about our analysis plans to convey what we’re doing.
  • Greater clarity and impact of communications to enable publications to be focused and streamlined.
  • Greater use of data sources where timeliness trades off against accuracy, or greater use of granular-level data where appropriate to provide useful new analysis that is fit for purpose for users’ needs.

We have had great support and advocacy for what we’re trying to do in Scotland from everyone in OSR, and it gives us confidence that how we’re innovating is in line with how the Code was designed. As Ed Humpherson said in his response to us on the priorities:

“We support your approach and there are several features that we regard as best practice, including the identification and communication of priorities for each analytical area; the involvement of users; and the openness about the potential for suspensions or changes to some of your current outputs… we certainly would not want to require you to keep all of your current range of statistical outputs if they were no longer aligning with user need”.

When Code 3.0 is finalised, all statistics producers should read it carefully and use it as a key to enable opportunities in the statistics they produce.

That’s the secret, after all!

A reason to be optimistic: sharing and linking data on road traffic collisions

In our latest blog, Head of OSR Ed Humpherson discusses how data sharing and linkage can provide vital insight into the problems and potential solutions when looking at road traffic collision data.

At the start of 2025, OSR published a rather optimistic piece on the potential for data sharing and linkage. Data sharing and linkage can yield new insights, identify previously hidden problems, and highlight what works and what doesn’t. It has huge potential to serve the public good.

But it’s also difficult to achieve, and there are still lots of frustrated researchers who have not been able to progress their work because they can’t access the data that they need.

So why are we optimistic? Partly, it’s a top-down perspective: we’ve seen progress through the increasing maturity of the UK-wide facilitation of data sharing and linkage provided by the excellent Administrative Data Research UK, reflected in programmes like the Ministry of Justice’s Data First.

But it’s also because, in some specific policy areas, there is a growing bottom-up drive to make better use of datasets by linking them to others, and enhancing the insight that they can provide.

Developments in data on road traffic collisions provide the best grounds for my optimism. The Department for Transport (DfT) publishes a long-standing data set on road traffic fatalities. The statistics show that the UK does well in international comparisons of road traffic fatalities per capita. They are based on a consistent set of categories for recording traffic collisions by police forces in England, Wales and Scotland, using a system called STATS19. They are well presented and clearly explained.

But the STATS19 data set has some limitations. The data series does not capture all traffic collisions, nor does it record all injuries. And as with all data based on police recording, the incidents recorded are those that come to the police’s attention – and not all do. To its credit, DfT is clear about these limitations in its annual statistical release.

Moreover, the picture painted by the traffic fatalities statistics can hardly be described as positive. Every fatality is a personal tragedy, impacting the families and friends of those involved in a deep and difficult way. And the long-term declines in fatalities seem to have stalled over the last decade, as shown in Figure 1 from the annual report:

Figure 1: All road users killed in traffic collisions in Great Britain, 1979 to 2023

The chart shows a decline in road users killed in traffic collisions in Great Britain from 1979, with the decline slowing from 2013 to 2023. The chart was originally published on the Department for Transport website. The data can be found here.

So, we should welcome anything that can give us more insight into the problems and potential solutions. This is where linked data comes in. By linking STATS19 data to ambulance data and hospital records, we can get a much richer picture of collisions – where they happen; who is affected and, just as importantly, the full extent of their injuries; how the victims are treated by the health care system; and the outcomes of their treatment. And this information can in turn help answer important questions, like why it is that the reductions in fatalities appear to have stalled, and whether there are practices and interventions that can reduce collisions and increase people’s survival chances.

The potential for the linkage of STATS19, ambulance and hospital data is the basis of an excellent paper by Seema Yalamanchili of Imperial College (PDF download), which in turn was the starting point for a round table I attended in January. The meeting was convened by the RAC Foundation and took place at the Royal Automobile Club. Seema presented her paper, setting out the case for this data linkage and the barriers to linking the STATS19 data – technical, legal and cultural alike – and, crucially, laying out a clear plan for addressing those barriers.

The meeting at the RAC Foundation was one of the most constructive, positive meetings that I’ve attended on data sharing and linkage. It was chaired by the RAC Foundation, and included people who produce the official statistics for the Department for Transport and the Department of Health and Social Care; NHS England; policy and scientific leaders from those departments; surgeons who work in trauma care; transport and health researchers; and data governance experts.

A lot of the meetings I’ve attended on data sharing and linkage focused on setting out all the barriers and constraints. And there are indeed a number of challenges. First of all, in any endeavour of this kind, the project should test whether what it is proposing is publicly acceptable. This needs to be done through a process of public involvement that listens to how people feel about linking sensitive pieces of information.

Then there is the legal authorisation – is what is proposed lawful, and who needs to approve it? This element can be complex and time-consuming, as any researcher who has proposed working with healthcare data can attest.

And beyond these ethico-legal considerations, how technically feasible is the linkage? Do the datasets have enough common identifiers for records to be linked with a reasonable degree of confidence? How easy is it to link a record of a road traffic injury to the trauma centre where the patient is treated?
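
To make the technical-feasibility question concrete, here is a minimal, hypothetical sketch of deterministic linkage on indirect identifiers. Every field name and record below is invented; a real STATS19-to-hospital linkage would typically use probabilistic matching (for example, Fellegi–Sunter methods) and run inside a secure research environment.

```python
import pandas as pd

# Toy records only -- all fields and values are invented for illustration.
# Real collision and hospital data share no direct person identifier,
# which is exactly what makes the feasibility question hard.
collisions = pd.DataFrame({
    "collision_date": ["2023-05-01", "2023-05-01"],
    "postcode_district": ["M1", "LS2"],
    "age": [34, 57],
    "sex": ["F", "M"],
})
admissions = pd.DataFrame({
    "admission_date": ["2023-05-01", "2023-05-04"],
    "postcode_district": ["M1", "LS2"],
    "age": [34, 57],
    "sex": ["F", "M"],
    "injury_severity": ["serious", "slight"],
})

# Deterministic linkage on the indirect identifiers both datasets share.
linked = collisions.merge(
    admissions, on=["postcode_district", "age", "sex"], how="inner"
)

# Keep only plausible pairs: admission on, or within a week of, the collision.
gap_days = (pd.to_datetime(linked["admission_date"])
            - pd.to_datetime(linked["collision_date"])).dt.days
linked = linked[gap_days.between(0, 7)]
print(linked)
```

Even in this toy version, the two questions that decide feasibility are visible: which fields the datasets have in common, and how to rule out false matches when those fields are not unique.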

All these issues – public perception, legal context, technical data quality and linkability – are complex in their own right. It can take a lot of time to work through each of them. But underlying these substantive issues lurks a deeper issue: it seems as though the culture of data-owning organisations is not always conducive to data linking. This could be for a range of reasons, including risk aversion or a lack of incentives. Whatever the cause, the result is that organisations are less supportive of data linking than their leaders claim to be.

The RAC Foundation meeting was different from many others I’ve attended on linkage. Led by Seema’s presentation, and drawing on her paper, it focused less on the barriers themselves, and more on what attendees can do collectively to address them. We all focused on what can be done, not what can’t be done. For example, the DfT lead statistician said that, if the linkage took place, he would be keen to include insights from the linked dataset in the annual publication.

The meeting ended with a clear commitment to take the work forward: to enrich the official statistics on road traffic collisions; to link data for more insight into trauma care; and to make a difference to a societal problem that continues to devastate victims and loved ones. Within a couple of weeks of the RAC meeting, a working group involving all the key players had sprung up. All this points to building momentum for change.

Of course, it may be that there are further challenges ahead. But this project shows that, with creativity, ambition and focus, progress is possible – and that cultural barriers to data linkage are by no means fixed. I hope this approach becomes the norm when people seek to use data to serve the public good.

So, why am I optimistic? Because of initiatives like this.

Embracing Challenge for Change

Hear about the Office for Environmental Protection’s journey of applying Trustworthiness, Quality and Value (TQV) from Darren Watson, their Principal Environmental Analyst, in this guest blog on embracing the Code of Practice for Statistics.

How would a new organisation understand, and most importantly communicate, how – or indeed whether – the government is improving the natural environment? How does it make sense not only of the state of the environment but also the many policies and strategies across government that affect it?

These are the challenges that the Office for Environmental Protection (OEP) has had to confront since its creation a little over 3 years ago.

One way to examine progress is through statistics – in our case, using data such as the area of woodland, or the number of water pollution incidents to present trends in key environmental indicators.

Deciding what to measure and what to present in a concise, understandable way that provides value to stakeholders, and ultimately contributes to improving the environment, is tricky. The difficulty lies in the range of policies and strategies in place, government targets and commitments, the numerous sources of pollution and their wide-ranging impacts, and the priorities and concerns of our stakeholders. Just listing some of our stakeholders – parliamentarians and policy makers, policy influencers, the media and the public – has been enough to make my head spin.

The challenge of measuring progress was evident in our first annual report on government progress in improving the environment, which we laid before Parliament in January 2023. At that stage we presented 32 indicators to measure the environment in England.

The work doesn’t stop there, however. Our progress reports need to evolve in response to the environment and policy landscapes, so we can never stand still. Our team therefore challenges itself to continually improve our assessment, to provide greater value for our users and the environment, and to respond to stakeholder feedback.

So, while we use others’ statistics (from bodies like the Environment Agency), rather than produce them ourselves, we are committed to applying the same high standards to our analyses.

As such, we decided to voluntarily adopt the Code of Practice for Statistics when developing our second progress report. The Code sets the standards to which producers of accredited official statistics (previously called ‘National Statistics’) are held. It is, in short, the best way to hold ourselves and our assessment to account, through striving to meet its three pillars: Trustworthiness, Quality and Value (TQV).

The most visible innovation in our application of TQV was our first Methodological Statement, published alongside our second progress report. At 95 pages, it was not lacking detail. But producing a report is the easy part; it is then how it is used, and the value it provides, that is most important.

So, a particularly proud moment for the team that produced the Methodological Statement came when our chair used it at the House of Lords Environment and Climate Change Committee to demonstrate the robustness of our recommendations. A tick in the box for trustworthiness and transparency.

But there is another example of our more fundamental use of TQV and its true value. And this goes back to our indicators.

Because the environment, the factors affecting it, our stakeholder needs and government are dynamic, our indicators must be too. They must adapt, and this is where using TQV – particularly the Q and the V – is key.

Our assessment process does not stop with the publication of our latest progress report every January. Following publication, we take stock of our progress and review and consult on our assessments. We also review those indicators, which is where the lenses of Quality and Value give us an ideal framework through which to challenge ourselves and be challenged by our stakeholders.

For Value, we ask ourselves:

Are our indicators still relevant to users? Do they, as a whole and individually, provide clarity and insight without making our assessment inaccessible? And do they improve and enhance our understanding, and our users’ understanding, of the environment?

For Quality, we consider:

Are we still using the most-suitable data sources? Have the data producers changed anything or stopped updating their statistics? Are our methods still sound and reflective of best practice? And how can we improve our data handling and quality assurance?

This is quite a list. But practically, the QV challenge has enabled the evolution of our indicators, with 23 new indicators included and 16 amendments made to existing indicators through our second and third reports. This work is driven by our vision to better understand those numerous factors that determine progress in improving the environment and to provide greater value and quality for our users. We hope this effort is demonstrated through the increasing awareness and influence of our assessments and their recommendations.

Our commitment to TQV and continuous improvement goes beyond this work. We are using TQV to examine how we assess trends and whether more statistically robust methods are available. We are also building an improved data system to increase the accuracy, quality and speed of our trend assessments, whilst supporting our ambitions to present data more effectively, including using maps.

So, to conclude: the OEP aims to be a trusted voice, and we are committed to being an evidence-led organisation that holds government and other public authorities to account. What better way to support this than to adopt TQV and show the users of our reports the effort we take to meet those aims?

Oh, and please take a look at our reports – we really do value feedback.

Beyond GDP: Redefining Economic Progress

Gross domestic product (GDP) is invariably a focus of debate about the state of the economy and whether this country, or any other, is making progress.

Yet it has its critics. Some complain that the focus on GDP as the single measure of progress distorts our priorities. They argue that GDP blinds us to many other important ways in which society should flourish.

There are indeed good grounds for thinking GDP is an incomplete measure of societal progress. Case in point: it has up to now omitted the depletion of the natural world, though this will be addressed in the upcoming updates to the international standards for the compilation of GDP – the new System of National Accounts WS.6 covers depletion.

Moreover, GDP does not capture lots of types of worthwhile activity that are not paid for (caring for a relative, for example) but does capture activities that are not worthwhile (like trading in illicit drugs).

Over 2024, we saw the maturing of several endeavours to address this issue. They fall into two camps. The first involves using the framework of GDP (or national accounts, to be more precise) to create a more comprehensive measure of growth and wealth. The second looks to develop adjacent measures that focus more directly on well-being and prosperity.

Let’s begin with approaches that focus on GDP.

One solution to this problem is to enrich the idea of GDP – to measure more things within the concept of the economy, like income, capital and growth. For example, GDP could measure the value of work done in the house and could incorporate the depletion of the natural world. A recent speech by the UK Statistics Authority Chair, Sir Robert Chote, highlights international work to widen what is captured as capital in economic measurement, and in particular to include natural capital.

A good example of an attempt to enrich GDP is the ONS’s recent inclusive income release. It supplements the standard GDP measures of output with measures of unpaid work, the costs of depleting the natural world and some elements of cultural wealth. The ONS summarises it well:

“Inclusive income estimates provide a broader measure of the economic welfare of the UK population. They reflect the economic value of both paid activity, included in gross domestic product (GDP), and unpaid activity, which includes ecosystem services and unpaid household services. The result is measures of economic progress that include activity and assets beyond those currently included in GDP.”

It’s an interesting endeavour, and provides some notable insights. For example, unlike GDP per person, inclusive income per person has not yet returned to its pre-pandemic peak. In short, I applaud the ONS’s ambition in taking on this difficult and methodologically challenging work – though it has initiated a lot of debate within OSR, which my colleague Jonathan Price will highlight in a subsequent blog.
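
To make the accounting idea concrete, here is a stylised sketch of how a broader income measure extends GDP. The figures are invented round numbers for illustration only – not ONS estimates, and not its actual methodology.

```python
# Stylised, invented figures in GBP billions -- not ONS estimates.
gdp = 2_500
unpaid_household_services = 1_100  # cooking, caring, childcare, ...
ecosystem_services = 90            # flows drawn from natural capital
natural_capital_depletion = 60     # the cost of running nature down

# A broader income measure: add unpaid activity, net off depletion.
inclusive_income = (gdp
                    + unpaid_household_services
                    + ecosystem_services
                    - natural_capital_depletion)

population_millions = 68
per_person = inclusive_income * 1e9 / (population_millions * 1e6)
print(f"Inclusive income per person: GBP {per_person:,.0f}")
```

The point of the sketch is simply that once unpaid activity and depletion enter the accounts, the level and trend of “income” can differ markedly from GDP itself.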

The second approach suggests that we should keep our focus on GDP more or less as it is (subject to the usual improvements and use of better data sources, as well as the communication of uncertainty; see our GDP review on this). And instead of extending the framework, it proposes supplementing it with meaningful alternative measures, including personal well-being measures, which focus on the things that GDP does not capture well. The ONS in fact provides an important foundation for these alternatives with its Measures of National Wellbeing.

A great example of the personal well-being approach is provided by Pro Bono Economics (PBE)’s recent report on the state of well-being in the UK, which estimates the number of people in the UK with low well-being. The report highlights what can only be described as a crisis of low well-being in the UK. (And full disclosure: I am a trustee of PBE.)

The PBE report is not the only work that focuses on non-GDP measures of well-being:

  • Along similar lines, the BeeWell project has proposed measuring children’s well-being, and has been implemented in Manchester, Hampshire and the Isle of Wight.
  • Carnegie UK’s Life in the UK provides a comprehensive overview of well-being. It extends the analysis from personal well-being to broader societal well-being, including perceptions of democratic health.
  • Complementing this UK-level perspective, the Global Prosperity Institute’s work is also noteworthy. It is more granular and micro, considering the prosperity of small areas using a citizen research approach. Its application to areas of East London is rich in insights into the experience of redevelopment.

What these various outputs show is that the “Beyond GDP” space is maturing. The ONS is doing some thoughtful, innovative things to extend the framework of national accounts. And a plethora of independent approaches are emerging.

So I begin this year optimistically.

Could 2025 be the year Beyond GDP moves from being a slogan to a reality for policymakers, Parliament, media, and, most importantly, citizens?


Lessons in communicating uncertainty from the Infected Blood Inquiry: What to say when statistics don’t have the answers

In this guest blog, Professor Sir David Spiegelhalter, Emeritus Professor of Statistics at the University of Cambridge, reflects on his experiences in the Infected Blood Inquiry and the importance of transparency around statistical uncertainty.

In my latest book, The Art of Uncertainty, I discuss the UK Infected Blood Inquiry as a case study in communicating statistical uncertainty. In the 1970s and 1980s, tens of thousands of people who received contaminated blood products contracted diseases including HIV/AIDS and hepatitis. Many died as a result. This crisis, with its catastrophic consequences, was referred to as ‘the worst treatment disaster in the history of our NHS’.

The Infected Blood Inquiry was set up in 2018 after much campaigning by victims and their families. I was involved in the Statistics Expert Group established as part of the Inquiry.

Building a model for complex calculations

Our group was tasked with answering a number of questions surrounding the events, such as how many people had been infected with hepatitis C through contaminated blood transfusions.

Some conclusions were relatively easily reached. We could be reasonably confident in the data and its verification – for example, that around 1,250 people with bleeding disorders were diagnosed with HIV from 1979 onwards.

Other figures proved much more difficult to estimate, such as the number of people receiving ordinary blood transfusions who were infected with hepatitis C before testing became available. We needed a more sophisticated approach that did not involve counting specific (anonymous) individuals but looked at the process as a whole. Consequently, we established a complex statistical model to derive various estimates. However, because data were lacking for some parts of the model, expert judgement was at times needed to fill the gaps, so we had to account for multiple sources of uncertainty.

Using this model, we were able to produce numbers that went some way to answering the questions we were charged with. However, some figures came with very large uncertainty due to the inherent complexity involved in their calculation, so we could not be reliably sure of their accuracy.
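
As a toy illustration of how a model like this propagates multiple sources of uncertainty, here is a minimal Monte Carlo sketch. The quantities, numbers and distributions are invented for illustration; they are not the Inquiry’s actual inputs or results.

```python
import numpy as np

rng = np.random.default_rng(42)
draws = 100_000  # number of Monte Carlo samples

# Data-based component: people exposed, with sampling uncertainty
# (invented figures).
exposed = rng.normal(loc=300_000, scale=30_000, size=draws)

# Sparse-data component: infection probability per exposed person,
# expressed as an expert-judgement Beta prior (mean ~2%, wide spread).
p_infected = rng.beta(2, 98, size=draws)

# Propagate both sources of uncertainty through the simple model.
infected = exposed * p_infected

low, mid, high = np.percentile(infected, [2.5, 50, 97.5])
print(f"Estimated infections: {mid:,.0f} "
      f"(95% interval {low:,.0f} to {high:,.0f})")
```

The width of the resulting interval, driven as much by the expert-judgement prior as by the data, is precisely what needs to be communicated honestly.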

A scale for communicating uncertainty

To prevent people from placing undue trust in our findings, we wanted to express the considerable caution that should be taken when considering our analysis. For this, we found the scale used in scientific advice during the COVID-19 pandemic to be a helpful model, in which confidence is expressed in terms of low through to high.

This scale was liberating; it allowed us to clearly convey our level of confidence in a way that accurately reflected the reality of the numbers. So, we could say that we only had moderate confidence that the available data could answer some of the questions we had been asked. And for others – for example, how many people had been infected with hepatitis B – we refused to provide any numbers, on account of having low confidence in being able to answer the question.

Lessons for the statistical community about communicating uncertainty

It can be difficult to admit to substantial uncertainty in data when dealing with a tragedy such as this. In the case of the Infected Blood Inquiry, this uncertainty meant that the victims and their families could not get precise answers to various questions for which they deserved some kind of closure.

It is also undeniably important, however, that those producing statistics are open about how confident they are in their numbers, so that people understand when statistics can reliably answer their questions, and when they cannot. Indeed, being transparent about any uncertainty in published data is one of the principles that the Office for Statistics Regulation (OSR) promotes in its intelligent transparency campaign and its championing of analytical leadership to support public understanding of, and confidence in, the use of numbers by government.

Intelligent transparency demands that statistical claims and statements are based on data to which everyone has equal access, are clearly and transparently defined, and for which there is appropriate acknowledgement of any uncertainties and relevant context. This concept helps us understand how to communicate our findings when we are asked to answer questions regardless of the quality of available evidence. And it acknowledges that publishing numbers without appropriate context, clarifications and warnings is counterproductive to providing real public value.

So, when it comes to communicating statistics to the public, honesty – or transparency, as we call it here – really is the best policy. I am delighted to see OSR placing more emphasis on intelligent transparency, and how statistics are communicated more generally, in its proposals for a refreshed Code of Practice. Ed Humpherson has also written an excellent blog on why communicating uncertainty is a constant challenge for statisticians.

Culture, psychological safety, and the impact on quality

As part of our series of blogs examining topics addressed in the proposed refreshed Code of Practice for Statistics, Dr James Tucker, Deputy Director for Health, International and Partnerships at the Office for National Statistics, shares his thoughts about psychological safety.

We’ve all heard the phrase “we need a culture change” at some point in our careers. But what exactly does the culture of an organisation entail, and how does it influence the quality of our work? Without a deep understanding of what constitutes culture, it’s easy for it to become just another buzzword.

My interest in this topic grew significantly when I established the Government Data Quality Hub, which aims to improve data quality across government. While having the right standards, guidance, and methods is crucial, these need to be coupled with the right organisational environment to facilitate quality improvement.

This exploration led me to the concept of “psychological safety”. Psychological safety refers to an environment where everyone feels respected, can share their views, and has positive intentions towards one another. In the context of improving the quality of our statistics, this means people feel comfortable raising issues, challenging the status quo, and proposing new ideas without fear of being perceived as negative or intrusive.

Have you ever had an idea that you believed could lead to significant improvements but hesitated to share it due to fear of negative consequences if it didn’t work out? Unfortunately, this is a common scenario and is an example of a cognitive bias called loss aversion. This bias can lead to various unwanted outcomes, such as unshared ideas and overlooked errors. In summary, silence enables mistakes and prevents improvements.

Creating a psychologically safe environment is a collective responsibility, but it is particularly important for leaders to take the initiative. This goes beyond simply saying “please speak up” or “my door is always open”. It involves actively inviting contributions, creating various forums and groups for the exchange of ideas, and ensuring people have opportunities to get to know one another. Governance plays a critical role here; people need to know where to raise issues, how to escalate them, and who is responsible for what.

I strongly believe that psychological safety is fundamental to developing a quality culture in all organisations. I am always delighted to discuss this topic further, so please feel free to get in touch with me.

The Office for Statistics Regulation (OSR), in its proposal for a refreshed Code of Practice for Statistics, guides statistical leaders to “encourage a quality culture that promotes good practice”, including by “provid[ing] a safe environment and support[ing] staff in raising quality concerns”.

We believe that when producer teams feel safe to share new ideas and raise concerns, and are supported in these efforts, the public can have greater confidence in the quality of the statistics delivered.

OSR has itself actively sought and embraced feedback from a range of people in developing its proposal. It is through these conversations that we hope to further strengthen the Code and increase its value not just for the public but for all those who use it.

In the spirit of welcoming challenge and being open to innovation, we continue to invite all views on the proposed changes to the Code in our consultation.


Quality under challenge: Regulating statistics and data from the Labour Force Survey

In our latest blog, Head of Assessment, Siobhan, discusses the challenges of regulating statistics and data from the Labour Force Survey.

Many concerns have been raised about the quality of the Labour Force Survey (LFS) produced by the Office for National Statistics (ONS), the statistics produced from it and the challenges ONS faces in delivering the online Transformed Labour Force Survey (TLFS) that will replace it.  

So, what are we, as the Office for Statistics Regulation (OSR), doing about it? Our key interventions include: 

  • removing the LFS’s status as accredited official statistics to signal quality concerns to users 
  • where there are quality concerns, removing the accreditation of statistics based on LFS data and associated Annual Population Survey (APS) data. To date, we have removed the accreditation from 14 statistical outputs 
  • setting requirements for ONS to improve its communication and engagement, and to consider the lessons that can be learnt from the LFS 
  • reviewing ONS’s work to develop an online replacement for the LFS, the TLFS 

What is the Labour Force Survey?

The LFS is the main household survey providing labour market information in the UK. It is ONS’s largest regular household survey, outside of the census, and has been running since the 1970s.  

Statistics calculated using the LFS (and the related Annual Population Survey, also produced by ONS) are vital to understanding the world we live in. These statistics are used to estimate the number of people employed, unemployed and economically inactive across the UK, in different parts of the UK, and for different groups, such as young people, older people and those with disabilities. These statistics also inform key economic decisions, such as interest rates set by the Bank of England and government tax and spending. 

De-accrediting statistics derived from the Labour Force Survey 

OSR’s primary regulatory role is to independently review statistics to ensure that they comply with the standards of Trustworthiness, Quality and Value in the Code of Practice for Statistics. When they meet these standards, we give them ‘accredited official statistics’ status; when statistics fall short, we can remove the accreditation. We may decide on this course of action for several reasons: concerns about the quality of the data sources used to produce the statistics; evidence that user need is not being met; or substantial changes to data sources and methods that require us to review whether the data remain fit for their intended use.

As we highlighted in our 2020 assessment, the response rate for the LFS had been steadily declining, and we called on ONS to address this issue and share any relevant information with users. ONS continued to develop its plans for the TLFS, which aims to address the shortcomings of the LFS.

The long-standing response rate challenges facing the LFS were exacerbated by the COVID-19 pandemic. These issues then became acute when the boost to enable pandemic operations was removed in July 2023. Following this, ONS had to suspend publication of its estimates of UK employment, unemployment and economic inactivity based on LFS data. 

We removed the accreditation from LFS-based estimates and datasets in November 2023. We have also removed the accreditation from other outputs based on data from the APS. The APS is based on responses to wave 1 and wave 5 of the LFS plus a boost sample. 

Monitoring ONS’s work to improve the LFS

When ONS reintroduced LFS-based labour market statistics in February 2024, we carried out a short review of these statistics. In August 2024, we carried out a follow-up review to check the progress made against the requirements set out in our initial report.  

We identified four outstanding requirements, which focus on: 

  • communicating updates on both the LFS and TLFS in one place that users can easily access 
  • improved communication around the uncertainty in the data and what this means for the use of these data 
  • the publication of more-detailed information about the principles and quality criteria that ONS will consider in making further LFS improvements and the transition to the TLFS 
  • the publication of more-detailed information about ONS’s plans for improving the LFS from now until the transition to the TLFS and for transitioning to the TLFS  

We will continue to closely monitor ONS’s work to improve the LFS and plan to report on progress against these requirements next year. 

Reviewing ONS’s work to transform the LFS 

Over the last few years, ONS has been developing a different version of the LFS using an online-first multimode approach (that is, online first, followed by telephone, with face-to-face interviewers used to encourage households to participate).

Recognising the significance of these statistics for government and economic decision-making, rather than wait to review the final statistical outputs, we have carried out regulatory work throughout their development. The aim of this work is to share early regulatory insights that help ONS ensure the new survey meets the standards of Trustworthiness, Quality and Value, in line with the Code of Practice for Statistics.

We have carried out our review in phases, with each focused on the most relevant elements of the Code. The aim is to assess the statistics produced from the survey against all parts of the Code once the transformed survey is fully implemented.  

The first phase (which started in April 2022) focused on the design and development work ONS had planned before transitioning to the new survey approach. We published our initial findings in November 2022. In July 2023, we published an updated letter and progress report following phase two of our review. We are now entering the third phase of this work, which focuses on ONS’s engagement with users, its communication about its planned work and how it is assessing the quality and readiness of its transformed survey. 

We plan to publish the outcome of phase three of our review of ONS’s LFS transformation in early 2025. 

What’s next? 

A key theme throughout our regulatory work on the LFS and TLFS has been the need for improved communication and clarity, specifically around plans, sources of uncertainty and quality criteria for the transformed LFS. We also called on ONS to identify what lessons can be learnt from the LFS to more effectively and transparently manage and pre-empt quality issues in the future. So, we were pleased to see the comprehensive communication of ONS’s plans and activities in its letter to the Treasury Select Committee and in the update it published on 3 December. We also welcome its engagement through the stakeholder panel and the external methodological input it has sought from Professor Ray Chambers and Professor James Brown.  

We recognise there is still more to do. We will continue urging ONS to provide regular, clear, and comprehensive updates for users, as well as to seek challenge and input from key users and experts to ensure the future production of statistics that meet user needs. 

We are also working to understand to what extent response issues are impacting other household surveys used across the statistics landscape. We have asked ONS to consider whether the issues, concerns, and lessons it has learnt from the LFS apply to its other surveys. 

We will also carry out our own lessons learnt review, focusing on how we can best apply the Code of Practice as a tool to support transformation activities.

The Power of Conversation

In our latest blog, Head of OSR Ed Humpherson discusses our consultation on a revised Code of Practice, which is open for comments until 14 February 2025. Read more about the consultation and how to have your say here.

I have been asking myself why it is only now that I am writing a blog on our consultation on a revised Code of Practice, several weeks after its launch.

The consultation is big news for OSR and for the UK statistical system: the Code is our foundational set of principles, our conceptual framework, our guiding light. And it’s not as if we are proposing some mere tidying-up measures, the sort of pruning and weeding that a good gardener does to maintain their garden.[1] We are proposing some significant landscaping changes – particularly to the structure and presentation of the Code.

Perhaps the answer comes down to my observation that most endeavours in the world of statistical regulation depend on, and are enriched by, conversation. OSR’s best work – our most impactful reports and interventions – is effective because of our engagement and interaction with users of statistics, both expert and not, and with the people who produce the statistics.

To give two examples: first, our annual state of the statistics system report is not dreamt up by us in a meeting room; it builds on a whole series of conversations across the statistical system, both with users and producers. Second, our assessments of individual statistics draw heavily on engagement with users; take a look at our recent assessment of ONS’s Price Index of Private Rents to see this in action.

Launching a consultation is not an end point in itself. It is an invitation to other people to share their thoughts, reflections and criticisms.

Moreover, the part of the work I particularly enjoy is not the sense of achievement on the day of publication. It’s hearing all the subsequent reactions and comments, and joining in the discussions that ensue.

That’s why I was so happy last week to participate in a joint event between OSR and the Royal Statistical Society (RSS) to discuss the new proposed Code. We heard a range of interesting and thought-provoking reactions, such as those from Paul Allin, Honorary Officer of the RSS for National Statistics, on the importance of recognising the public role of statistics; from Ken Roy, an independent researcher and former head of profession in a government department, who highlighted that the Code is the glue that holds together the large and complex UK statistics system; and from Deana Leadbeter, Chair of the RSS Health Statistics User Group, who welcomed the ambition of a more digestible Code for a wider audience. And we had some excellent questions from the audience on topics ranging from the limits to trustworthiness (from a colleague in the Hungarian national statistical institute) to the importance of simplicity.

These productive conversations are why I’m looking forward to the debates and dialogues around the new Code in the coming months – including those with the Market Research Society and the Forum of Statistics User Groups.

I want to hear people’s reactions to the new Code. And I want to hear a wide range of other thoughts – not just about the things that we want to highlight, but about the things we have missed.

This emphasis on engagement and conversation is not only a core value for OSR. It’s also central to the Code of Practice itself. The new Code that we are proposing sets even clearer and firmer requirements for statistics producers in how they should engage with their users and transparently communicate how they have produced their statistics, and what their statistics do (and don’t) mean.

So, back to the question at hand: why didn’t I write this blog until now? It’s this: for me, the day the consultation is published is not always the best day to publish a blog. Instead, it can be better to wait until we’ve started to hear some views. Or, to put it simply: communication shouldn’t be about broadcasting a fixed view. Instead, it’s all about the power of conversation.

Read more about the Code consultation and how to have your say here. 

[1] What is it with me and gardens? I used to do a presentation all about walled gardens – how official statistics can’t be a walled garden, pristine but closed off from the world. They need to be open and accessible. Now, as then, I reach for a garden metaphor. It can’t be that I use these gardening analogies because I myself am an adept and successful gardener. I mean, you should just look at my own garden to realise that.


The Power of Public Engagement in Shaping Our Work 

In our latest blog post, OSR Research Officer Nick discusses how public engagement shapes our work, specifically in relation to our research project about how individual members of the public use official statistics to make decisions in their personal lives. This OSR research is delivered by the Policy Institute at King’s College London and the Behavioural Insights Team and will be published in January (as discussed in a previous blog, “How do we use statistics in everyday life?”).

Here at the Office for Statistics Regulation (OSR), we are increasingly recognising that involving the public in our work is crucial for ensuring statistics truly serve the public good. Because of that, we have begun exploring public engagement.  

Public engagement is important and impactful  

For us, public engagement is the process of involving and collaborating with those outside of government and statistics to share information, gather input and foster meaningful dialogue on issues that affect them. Public engagement is about more than just asking people for their opinions. It’s about making sure that the work we do is grounded in real experiences and needs. By involving the public, we can build on our own technical expertise to make sure our work is meaningful and useful to the very people we aim to serve. 

OSR has begun engaging the public on our research 

Recently, we spoke to a small group of people to find out what they thought of different aspects of our current research project exploring how people use statistics to make personal decisions. These discussions have shown us just how valuable public engagement can be in shaping our research to be more relevant and acceptable to the public. 

In our public engagement sessions, we invited members of the public to share their thoughts on different aspects of our research proposal. We met with them virtually multiple times over the course of the project, so that we could tell them how their advice had helped us and continue to seek their views. This included discussing how they felt about the life decisions we wanted to explore in our research, such as choosing a school for a child. We also asked for their feedback on survey questions we plan to use in the next stage of our research. While this was a light-touch approach to public engagement (a small group that met online for only an hour each time), we still got a lot from these discussions.

We found this approach very valuable 

Public engagement is a powerful tool that enriches our work and fosters a collaborative spirit between OSR and the public. While this is relatively new for OSR, our recent sessions have demonstrated the real value of this approach. From this experience of public engagement, we have three main reflections: 

  1. People wanted to be part of these discussions 
  2. Public contributors are full of useful suggestions 
  3. It is important to seek diverse perspectives 

People wanted to be part of these discussions  

Discussions about statistics and research have the potential to be dry. However, when we spoke to members of the public, they were enthusiastic about the research and appreciated the opportunity to contribute. For example, one of our public contributors said: 

“If someone said to me, you are going to be involved in some statistics research, it seems like whether I will be able to do that? As a lay member of the public, maybe I don’t know?… but it is easy… I would like to be involved in future opportunities.”

This quote shows how important it is to give people the opportunity to be involved in these types of discussions. 

Public contributors are full of useful suggestions 

Attendees provided valuable feedback on various ways to improve our research, which has already informed several key decisions in our project. For example, we originally planned to ask people whether or not they used official statistics in choosing which school to send their child to. However, public contributors pointed out that not everyone actually gets to choose a school for their child:

“You might be able to have a parent that will look and say this is the school I want my child to go to based on their needs, based on the statistics. But the actual chances of them getting that school is very slim. So there may be people that feel that, yes, I can look the statistics, but then I have an inability to choose.” 

Because of this advice, we changed the decision to be about which school to apply to, rather than which school to send a child to. This type of change helped our research be more relevant to members of the public. 

It is important to seek diverse perspectives  

Engaging with a diverse group of attendees allowed us to gather a wide range of viewpoints. For example, we heard from people whose first language was not English on how the statistics in the research could best be presented in a way that they could understand:  

“[Statistics are] in the English and I have language barrier, so how can I like utilise that… In the bar diagram it’s ‘OK this bar is high and this bar is low’ so it’s easy to understand.” 

This perspective led us to share visual aids with participants in the first stage of our research, rather than solely presenting a traditional statistical bulletin made up of prose. Doing so made our research more accessible to a broader audience, and allowed a wider range of participants to engage meaningfully.

We plan to use public engagement more going forward 

The success of these public engagement sessions has reinforced our commitment to involving the public in OSR’s work. As part of this commitment, we are now part of the Public Engagement in Data Research Initiative (PEDRI). In addition, future research projects at OSR will build on the feedback received in this project, and we hope to undertake public engagement in key projects beyond research as well. In doing so, we will be aligning with our view that serving the public good means treating official statistics as public assets; this involves allowing the public to understand and engage with our work. Through public engagement about our work at OSR, we can ensure it is more transparent, trustworthy and responsive to the needs of the public. 

If you would like to learn more or even become a public contributor yourself, please contact us at research.function@statistics.gov.uk. We look forward to hearing from you.  


Just three words – the birth of a code

In our latest blog, Penny Babb, OSR’s Head of Policy and Standards, speaks about her experience of working with the Kenya National Bureau of Statistics to develop the Kenya Statistics Code of Practice.

I am a self-confessed code-geek. I love talking about the Code of Practice for Statistics! I may be a tad boring at parties, but I have a passion for the Code that is hard to miss (and I hope is infectious).  

What better, then, than to get together with a group of people, (mostly) away from the distractions of work, to talk about codes for a week! There truly was no escape for my audience as we met on the shores of Lake Naivasha in Kenya.

It was such a privilege to work with the Director General and statisticians of the Kenya National Bureau of Statistics (KNBS), guiding them through the preparation of their own code of practice. This collaboration was part of the strategic partnership between the UK Statistics Authority (UKSA) and KNBS. It wasn’t about the UK Code, but an opportunity to share our experiences and to support KNBS as they wrote their first code – finding the message that they wished to share with organisations and analysts across the statistical system to inspire good practice. KNBS is committed to ensuring official statistics remain a valuable resource for all stakeholders in the statistical system in Kenya and to demonstrating international best practice. It sees the code as being central to achieving these goals.

One of the lessons I took from being involved in developing the second edition of the UK Code was the power of explaining it simply, in just three words. And as we have seen in our own recent review of the Code, our three words – trustworthiness, quality and value – are extremely effective in helping people who produce and use official statistics to understand what matters. My goal was to help my Kenyan colleagues find their own three (or more!) words.

The dedication and skill of the team of statisticians and leaders at KNBS was evident and central to completing their task of writing their code. The Kenyan code had to be one that was meaningful and inspiring in their context – not just to staff in KNBS, the national statistical institute for Kenya, but for all other organisations in their statistical system.  

In two week-long workshops, Caroline Wangeci (the then ONS strategic adviser to KNBS) and I led sessions to develop the Kenyan code of practice. Our codes are both grounded on the UN Fundamental Principles for Official Statistics, but each speaks to specific needs within our countries. 

The commitment and dedication of the Kenyan team was inspiring to witness and humbling to support. Writing a code of practice to set standards for statistical production and dissemination is challenging. But doing so in concentrated bursts of effort, away from day-to-day responsibilities, allowed the team focused time to think through the key values and messages needed to inspire those involved in producing official statistics. After spending a day diving deep into code matters, many of the participants then spent further hours catching up on their work, ensuring the smooth running of their statistical teams.

KNBS has published the Kenya Statistics Code of Practice (KeSCoP). It is a remarkable achievement in its clarity and mission. As the Code highlights,  

“KeSCoP is anchored on three pillars, namely, Quality, Trustworthiness and Progressiveness. Each pillar contains values and commitments that producers of statistics should commit to when producing and disseminating statistics.” [page 1, KeSCoP]

KeSCoP is drawn from the principles and requirements of the Kenya Statistical Quality Assurance Framework (KeSQAF). The Code helps the drive for continuous improvements in statistical practice.  

“Compliance with KeSCoP gives producers of official statistics and the users confidence that published statistics are of high quality, have public trust and are produced by institutions and people that are progressive.” [page 2, KeSCoP] 

It echoes our UK Code but in a truly Kenyan way – they have defined their concepts and practices in the ways that speak to their application of the UN Fundamental Principles for Official Statistics.  

Quality has values for:

  • relevance
  • timeliness and punctuality
  • accuracy
  • coverage/comprehensiveness
  • methodological soundness
  • professional skills and competencies
  • data sources

Trustworthiness has values for:

  • impartiality and equal access
  • confidentiality
  • accountability and transparency
  • integrity and credibility

Progressiveness has values for:

  • innovation
  • continuous improvement
  • user focus
  • adequate funding

I love the clarity of each value and commitment, providing a clear steer for any producer on what it looks like to deliver official statistics to the required standard, with associated indicators or measures of success.  

We have much that we can learn from KeSCoP as we refresh the UK Code and seek to ensure its clear communication and accessibility. 

To find out more about UKSA’s international development work, see ONS’s published strategic plan for 2022–2025.