What is Levelling Up and how will its success be measured?

‘Levelling Up’ is now a term used daily in the media – but what does it mean and how are we, as the statistics watchdog, going to monitor its impact? Statistics Regulator Ben Bohane discusses…

Not long before I joined OSR as a Regulator, the Conservative Party made ‘Levelling Up’ a key part of their 2019 election manifesto. It focused on challenging and changing geographical inequality in the UK through investment and new infrastructure, to allow ‘everyone the opportunity to flourish’.

Prior to working at OSR, I taught Economics to young people as a secondary school teacher. I found it really rewarding to teach young people how changes in the economy and government spending might affect their lives. I think back to those young people now – do they understand what Levelling Up is amongst the media hype? How will the proposals outlined in the Levelling Up White Paper impact their lives and futures?

If I were to explain it to my students now, I would describe Levelling Up as a plan to eradicate regional disparities in the UK, raise living standards and provide greater opportunities to people in communities and areas that have so far not shared in the success of more prosperous parts of the country.

But with confusion over what the concept really means, how can we measure something whose success means different things to different people? Back in March, OSR’s Director Ed Humpherson wrote about ‘Why I Love Evaluation’, stating that evaluation “provides evidence of what works; it supports good policy; it builds the skills and reputation of analysts; it helps scrutiny.” Ongoing evaluation of Levelling Up will be key to its success.

In OSR, our focus is on ensuring that existing statistics which can be used to measure the success of government policy are trustworthy, of sufficient quality and provide public value – and that the statistics are available in the first place. As the government highlights in the White Paper, many of the metrics that will be used to measure the success of Levelling Up are either not yet available or of insufficient quality. Clarity about what is being measured is important if people are to track progress through data.

In OSR we’ve already been working on public interest in regional disparities and fighting for a statistical system that is more responsive to regional and local demands. Appendix 2 summarises this work.

Our Business Plan highlights that we have seen a growing public expectation that decisions affecting all aspects of our lives will be evidenced by trustworthy and accessible statistics. Over the coming months and years, we will continue to review new statistics and data sources from the Department for Levelling Up, Housing and Communities, the Office for National Statistics and other data providers as they are developed, to ensure that evidence and evaluation are at the forefront of pushing the plans forward.

Our regulatory programme for this year focuses on projects that will improve public understanding of the issues, current and emerging, that people want to be sighted on. As the statistics regulator reviewing the statistics used in Levelling Up, we will be tracking the implementation of the GSS subnational data strategy and new tools such as the ONS Subnational Indicators Explorer, ensuring statistics are the best quality they can be and are clearly focused on measuring the outlined Levelling Up missions.

Statistics supported by clear analysis and evaluation will provide the evidence to measure the impacts, successes and failures of Levelling Up – and of any future government policies to address regional disparities and improve people’s lives. As the government implements policies to address regional inequalities – and businesses and households respond – we will focus on ensuring that the statistics both accurately measure progress against this ambitious long-term strategy and live up to its ambition. It is important to me as a statistics regulator that we do this. After all, the vision of Levelling Up is so important to the futures of those young people I used to teach.


Appendix 1: Background foundation work on surveys that are used to produce economic statistics

ONS Purchases Survey statistics (December 2019)

ONS reintroduced the survey following the National Statistics Quality Review of National Accounts to provide better information about businesses’ purchasing patterns.

Our review found that the quality of outputs from the survey was still being improved, which reflected ONS’s own narrative that it would normally be several years before a new survey produced statistics that could be used with confidence. It also reflected that the Annual Purchases Survey aims to collect variables that do not naturally fit with many businesses’ operational models. Our report noted the discrepancy between estimates of intermediate consumption derived from the Annual Purchases Survey and the Annual Business Survey. We said that understanding these differences, explaining them well and using them to further improve the statistics is an essential part of demonstrating that the quality of the statistics meets users’ needs.

ONS UK Business Demography statistics (October 2020)

We reviewed ONS Business Demography statistics because we felt they should be considered key economic indicators. They are not regarded as such because they are not as good or as useful as they should be. The ONS’s business register – the Inter-Departmental Business Register (IDBR) – holds a wealth of data on the UK’s business population, some of which are used to produce business demography statistics; the remainder is a largely untapped resource. In response to the COVID-19 pandemic, ONS introduced a weekly indicator of business births and deaths and a quarterly series of experimental business demography statistics. These innovations presented a platform for further development of the statistics. However, some of the required improvements rely on significant investment, and we said that work to develop ONS’s business register should urgently be reintroduced to ensure that users’ needs for business population statistics are met. In our review we made several short-term recommendations for ONS:

  • demonstrate progress in understanding the access difficulties users are experiencing when using IDBR data and linking it with other data
  • publish its plans for publishing more timely business demography statistics, and its plans for developing the recently introduced quarterly experimental statistics
  • publish a narrative covering what ONS already knows about the range of key data quality issues, building on the supporting quality information provided with the new quarterly experimental statistics
  • publish its plans to restart and resource work to develop its business register

We also said that, in the longer term, ONS should publish a plan which includes specific actions, deliverables and a timetable explaining how it will address the improvements identified in the report, including plans for reviewing the funding of the Statistical Business Register.

ONS Annual Business Survey statistics (September 2021)

We reviewed the ONS Annual Business Survey (ABS) and found that the significant delay in the publication of ABS data means that the data are not always used to measure the ongoing impacts of structural and cyclical changes to the UK economy. As a result, ABS data are not fully meeting users’ needs for timely and detailed data on business performance.

We found that ONS’s focus on transforming its short-term surveys has meant a lack of investment in finance, staff and systems, so ABS data have been unable to keep up with changing demands on their use. The lack of investment has curtailed ONS’s efforts to improve the detail and timeliness of ABS data.

We found that a lack of investment has been a common theme of OSR’s recent assessments of ONS’s structural economic surveys and statistics. We strongly urged ONS to revisit the investment needs of these outputs, to ensure structural economic data are available to assess, for example, the ongoing impact of the economic shocks of Brexit and the pandemic.

Appendix 2: OSR work on regional statistics and Levelling Up

ONS Statistics on Regional Gross Value Added (August 2017)

We reported that “Many of the R-GVA users that we spoke to cited poor timeliness as a limitation of these statistics” and “that unless the R-GVA statisticians find new sources that provide the same level of detailed information more quickly than the current sources (which they indicated to us is unlikely in the short term), the timeliness of these statistics is unlikely to change significantly”.

We also said that “ONS might do more to bring out the differences between the regions through the proportions of people in the region who are economically inactive, which can affect the GVA per head statistics and the impact of commuting on the statistics” and requested ONS “to work with its national and regional stakeholders to bolster the statistical services such as information, advice and guidance available to provide even greater insight in sub-regions (particularly new city-regions) and in preparing contextual information to aid regional and country media in interpreting the statistics”.

ONS Statistics on Regional Gross Value Added (Phase Two) (June 2018)

We asked ONS to make further improvements, for example, “investigate whether improvements in the quality of deflators by adopting regional price statistics could be achieved technically and cost-effectively taking account of expected use of the statistics and user need”. We also asked ONS to “review the best way of making quality metrics both more useable to a less expert audience and more accessible generally”.

HM Treasury Statistics on Government Spending: Country and Regional Analysis (May 2019)

We asked HM Treasury to:

  • collaborate with producers of other public finance statistics and with analysts in the countries and regions to seek views and update their understanding of users’ needs, to better support the use of these statistics
  • communicate effectively with the widest possible audience to increase awareness of the statistics and data
  • present CRA data in a more engaging way that supports and promotes use by all types of users and those with interests in spending at programme and service levels (sub functional levels)
  • test the strength of user need for CRA on a ‘where-benefits’ basis, examine the feasibility of collecting data on this basis and the trade-off between enhanced functionality and increased burden on data suppliers
  • provide a clear and comprehensive account of allocation methods in each annual CRA publication, including links to published documents about allocation methods for all ongoing major project spending
  • ensure that users are provided with appropriate insights about changes in the data, including helping users understand impacts on the CRA data and providing links, when applicable, to other output areas where information on Brexit impacts has already been published
  • establish a development programme for these statistics and periodically review that programme; be open about progress towards meeting priorities and objectives; and arrange for users and other stakeholders to be involved in prioritising statistical plans
  • strengthen its arrangements for reviewing requests to allow pre-release access to new people; review the current list of those with pre-release access for CRA, with a view to minimising the numbers of individuals included and inform the Authority of the justification for each inclusion

ONS Experimental statistics on Regional Household Final Consumption Expenditure (HFCE) (January 2021)

We highlighted the potential of HFCE estimates as a highly important component in fully understanding regional economies. Prior to this there were no regional estimates of the expenditure measure of GDP, except in Scotland, a topic we previously highlighted in our 2020 submission to the Treasury Select Committee’s inquiry into Regional Imbalances.

DLUHC Levelling Up Fund prospectus (March 2021)

The Levelling Up Fund prospectus included a list of local authorities by priority category.

However, initially no description of the methodology used was attached to enable users to understand how local authorities were allocated to priority areas. A week later DLUHC published a methodology document, but it was still not possible to recreate the full dataset used to allocate local authorities to priority areas.

We wrote to DLUHC publicly highlighting our concerns about the transparency of data related to the Levelling Up Fund, and we requested that DLUHC publish the data supporting the allocation of priority areas to enhance public confidence in the decisions being made.

As a result, DLUHC published the Levelling Up Fund: prioritisation of places model, which showed all the steps that were taken when using data to assign Local Authorities in England, Scotland and Wales to categories 1, 2 and 3. The spreadsheet included a “data and input construction” tab which included links to the source data with explanations of the source and why it was chosen.

ONS Foreign Direct Investment (FDI) Statistics and DIT Inward Investment Statistics (April 2021)

As a result of our review, new questions were added to the quarterly and annual FDI surveys to collect more granular data on sub-national FDI, and ONS is now publishing experimental UK sub-national FDI statistics.

NISRA Broad Economy Sales and Exports Statistics (BESES) (December 2021)

As a result of our review, NISRA will be publishing more timely imports data and has developed an interactive dashboard that provides more granular monthly international trade data on products.

ONS Income Estimates for Small Areas statistics (January 2022)

We suggested that further value could be added by ONS understanding the needs of current non-users who require income estimates at lower levels of geography.

Users want to be able to aggregate estimates for lower layer super output areas into bespoke geographies, but the estimates are given for middle layer super output areas, which are too large for users’ needs.

DLUHC Planning applications in England statistics and Homes England Housing Statistics (March 2022)

We felt at the time that planning performance and planning reforms would likely be included, in some part, in new Levelling Up legislation, given its assumed focus on local area development.

Local authority planning application performance had also been identified at the time as a priority departmental outcomes metric in the 2021 Spending Review.

We advised further development of the statistics. One such development was sub-national commentary, which should be introduced to help explore, for example, trends in planning to support regeneration in the 20 English towns and cities prioritised in the Levelling Up White Paper.

We found the statistics could be further enhanced if Homes England were to publish information about aspects of quality, for example, limitations of data sources, quality assurance checks carried out by data suppliers, and the team’s assessment of data quality against our quality assurance of administrative data (QAAD) matrix.

We also asked Homes England to consider how any uncertainty in the statistics might be more clearly communicated to users, as the latest data are provisional and subject to revision.

Finally, we suggested further insight and context should be added by enhancing the narrative and analysis provided for users who wish to explore the topic further.

Appendix 3: Treasury Committee evidence

2019 response

Key point

There is a range of official statistics on regional economic performance. They should be considered alongside other forms of data published by Government and others.

What we said

All data, whether classified as official statistics or not, should seek to adhere to high standards of trustworthiness, quality and value (which we describe as voluntary adoption of the Code of Practice’s pillars).

Key point

There are some limitations to the current data sources, both in terms of data gaps and in terms of quality.

What we said

In our written evidence referring to regional economic data, we highlighted “the quality of regional data is affected by the granularity that the data sources can provide, and/or the timeliness of the data provision”. Regional data is more volatile than national estimates and there are significant challenges in forming regional estimates of GDP.

We said: “In arriving at aggregate estimates, statisticians often combine both administrative and survey data sources… and then disaggregate to provide regional breakdowns (a top-down approach). Survey data is often limited in its depth: for example, the data used to compile R-GVA can become stretched at lower geographies, becoming increasingly volatile as it is disaggregated further.”

Key point

There is a significant use of modelled data, which apportions national data to regions using formulae, rather than directly observed data, which would be gathered at the local level.

What we said

“During our regulatory work, we received feedback from users of regional and sub-regional economic data expressing concern that they can’t tell whether the data they are using is based on observed economic behaviour or come from modelled estimates. They view data based on observed estimates as more reliable than modelled estimates”.

At our request, the ONS conducted research into how much data measuring economic growth are directly observed at a regional level and collected in a way that can be immediately and wholly assigned to a single region, and how much data are modelled to provide regional estimates.
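To make the distinction concrete, here is a minimal illustrative sketch in Python. The figures, regions and weights are entirely hypothetical (not ONS data): it simply contrasts a top-down apportionment of a national total with directly observed regional figures.

    # Illustrative sketch only – hypothetical figures, not ONS data.
    # Top-down (modelled): a national total is apportioned to regions using
    # formula-based weights derived from a proxy indicator.
    # Bottom-up (observed): figures are collected in a way that can be
    # wholly assigned to a single region.

    national_gva = 1_900_000  # hypothetical national aggregate, GBP millions

    # Apportionment weights from a proxy such as regional employment shares
    # (three illustrative regions only; a full set of weights would sum to 1).
    employment_share = {"North East": 0.04, "London": 0.19, "Scotland": 0.08}

    # Modelled estimates inherit whatever the proxy assumes about each region.
    modelled = {region: national_gva * share
                for region, share in employment_share.items()}

    # Directly observed regional figures (hypothetical).
    observed = {"North East": 72_000, "London": 415_000, "Scotland": 148_000}

    for region in employment_share:
        gap = observed[region] - modelled[region]
        print(f"{region}: modelled £{modelled[region]:,.0f}m, "
              f"observed £{observed[region]:,.0f}m, gap £{gap:,.0f}m")

The gap in each row is exactly what users said they could not see: a modelled figure looks just as precise as an observed one, which is why producers are asked to be clear about which approach lies behind each estimate.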

Key point

It may be worth considering a network of regional statistical observatories, akin to the Bank of England’s regional agents, that can help provide ONS and others with better insight into regional economic issues.

What we said

We wanted to highlight the benefits of a presence outside the offices in London, Newport and Titchfield – both to better understand the dynamics of regional economies, and to be closer to users with a regional focus (like mayoral combined authorities).

2020 response

“Developments [in regional statistics] will be enabled by better access to administrative data, where ONS can provide enhanced (ideally flexible) geographies with more use of direct estimation”.

“Regional performance information published by the UK Government can be found in some departmental annual reports and accounts but is not summarised in any compendium”.

We laid out several important conditions for publishing regional economic forecasts that could be adopted to help people make judgements about the UK and regional economy.

One of these conditions was: “It would be important to communicate the uncertainties associated with any regional GVA forecasts. For example, there are deficiencies in historical GVA data. Forecasts will only be as good as the data they rely on”.

How OSR secures change

Why do organisations do the things they do? Strip away all the language of business plans and objectives and strategies, and what it often boils down to is wanting to achieve some kind of positive impact.

It’s important to remember that as we launch our latest business plan for 2022/23. In this blog, rather than highlight specific outputs and priorities, I want to talk more generally about how OSR, as a regulator, secures positive change.

By change, we mean regulatory work or interventions that ensure or enhance statistics serving the public good. There are basically two ways in which our work leads to statistics serving the public good. Our work can:

  • secure a positive change in the way statistics are produced and/or presented; and/or
  • make a direct contribution to public confidence and understanding.

OSR clearly does secure impact. In response to our reports, producers of statistics make changes and improvements to their statistics and other data. Statistics producers also use the presence of OSR in internal debates as a way of arguing for (or against) changes – so that OSR casts a protective shadow around analytical work. OSR can also secure changes to how Ministers and others present data. And OSR also achieves impact through getting departments to publish previously unavailable data. In all these ways, then, OSR secures impact, in the sense of statistics serving the public good to a greater extent.

In terms of formal statutory powers, the main lever is the statutory power to confer the National Statistics designation, which in effect is a way of signalling regulatory judgement. The regulatory role is to assess: to review and form a judgement, and on the basis of that judgement the National Statistics designation is awarded. We have recently been reviewing the National Statistics designation itself.

A further power in the Statistics and Registration Service Act is the power to report our opinion. Under section 8 of the Act, we are expected to monitor the production and publication of official statistics and report any concerns about the quality of any official statistics, good practice in relation to any official statistics, or the comprehensiveness of any official statistics.

These statutory powers do not create influence and drive change by themselves. We need to be effective in how we wield them. We have to supplement them with a powerful vision, good judgement, and effective communication.

The power of ideas

The most significant source of influence and impact is the power of the ideas that underpin the Code of Practice for Statistics. The Code is built on the idea that statistics should command public confidence. It is not enough for them to be good numbers: collected well, appropriately calculated. They must have attributes of trustworthiness, quality and value.

The power of these ideas comes from two sources. First, they are coherent, and in the Code of Practice they are broken down into a series of increasingly granular components – so the ideas are easy for producers to engage with and implement. Second, they have enormous normative power – in other words, trustworthiness, quality and value represent norms that statisticians and senior staff want to be seen to adhere to, and that wider users want to see upheld.

These powerful, compelling ideas represent, then, something that people want to buy into and participate in. A huge amount of OSR’s impact happens when OSR is not even directly involved – by the day-to-day work of statisticians seeking to live up to these ideas and the vision of public good that they embody.

Judgements

OSR’s work begins with the ideas embodied in the Code, which we advocate energetically, including through our crucial policy and standards function. The core work for OSR actually consists of making judgements about trustworthiness, quality and value, in multiple ways:

  • our assessments of individual sets of statistics, where we review statistics, form judgements, and present those judgements and associated requirements to producers and then publicly – either through in-depth assessment reports, or by quicker-turnaround reviews of compliance;
  • our systemic reviews, which address issues that cut across broader groups of statistics, and which often focus on how well statistics provide public value, including highlighting gaps in meeting user needs;
  • our casework, where we make judgements about the role statistics play in public debate – whether there are issues with how they are used, or how they have been produced, which impact on public debate; and
  • through our broader influencing work, including our policy work, and our research and insight work streams.

These judgements are crucial. Our ability to move fluidly using different combinations of our regulatory tools is important to securing impact. It allows us to follow up where the most material change is required and extend our pressure – and support – for change.

We are able to make judgements primarily through the capability of OSR’s people. Their capability is strong, and we depend on their insight, analysis, judgement and ability to manage a range of external relationships.

Communication and reach

It is not enough for us to make good judgements. We need to make sure that any actions are implemented – in effect, that our judgements radiate out into the world and lead to change.

There are three main channels for achieving this reach:

  • Relationships with producers: our relations with producers are crucial. Heads of Profession and the lead statisticians on individual outputs are the key people for making improvements; their buy-in is crucial.
  • Voice and visibility: having a public voice magnifies our impact. It ensures that policymakers are aware of what we do and understand that our interventions can generate media impact.
  • Wider partnerships: while our direct relationships with producers and our public voice can create sufficient leverage for change, we also draw on wider partnerships. For example, credible external bodies like the RSS and Full Fact can endorse and promote our messages – so that producers face a coalition of actors, including OSR, pushing for change.

And we put a lot of emphasis on the views, experiences and perspectives of users of statistics. Almost all our work involves engaging with users, finding out what they think, and seeking to ensure producers focus on their needs.

In that spirit, we’d be very keen to get reactions to our own business plan – from all types of users of statistics, and also from statistics producers.

Conclusion 

Business plans should not simply be a list of tasks. It is also important to be clear on how an organisation delivers – how individual projects and priorities help achieve a positive impact. In OSR’s case, achieving this impact involves the power of ideas, good judgement and effective reach.

With this clarity around impact, our business plan (and work programme) comes to life: more than just a set of projects, it’s a statement of ambition, a statement of change.

But the business plan is also not set in stone. We are flexible and willing to adapt to emerging issues. So if there are other areas where we should focus, or other ways we can make a positive difference, we’d really welcome your feedback.

Guest Blog: National Statistics – The Road to Accreditation

The Office for Statistics Regulation has today designated the Family Practitioner Services statistics in Northern Ireland as National Statistics.

In this guest blog, Martin Mayock, Head of the Family Practitioner Services (FPS) Information Unit in Northern Ireland’s Health and Social Care (HSC) Business Services Organisation (BSO), and a Senior Statistician in the Northern Ireland Statistics and Research Agency (NISRA), discusses his experience of the assessment process – which wasn’t, in fact, as daunting as his team first thought!

By way of background, the Information Unit had always produced a wealth of information across primary care: medical, ophthalmic, dental and pharmaceutical services, for internal use by our health policy and operational colleagues. We even, on occasion, publicly released various ad-hoc reports and tables, but were a long way short of being a Code-compliant organisation – this despite BSO being a legally specified producer of Official Statistics (OS) since 2012. It’s not that we didn’t recognise the importance of compliance; it was simply a matter of resources and carving out sufficient time from our day-to-day analytical project support to progress our OS aspirations. The key was good planning and accepting that this would not be done overnight, taking whatever time was necessary to update our processes, plug identified gaps and develop documentation. Of course, we needed the backing of senior management if we were going to be “distracted” from the day job, so we first had to sell the benefits.

Our first milestone was to release an Annual Compendium in 2018, covering all of our key primary care areas, complying with the Code as far as possible. A quarterly series soon followed and by our third year, responding to user demand, we were ready to split the compendium into separate service areas with a dedicated team responsible for each. User engagement was a key component of the work programme with readership surveys, supplemented by targeted stakeholder interviews, allowing each release to evolve in a way most beneficial to its user base. The fact that our teams were both users, through our ongoing project work, and producers of the data helped enormously in improving its quality and offering guidance on its use. 

By our fourth year, and following two successive releases of our service-specific publications, we were finally ready to push the button and subject our outputs to the all-seeing eye of the OSR. Yes, it had taken a few years to get to this point, albeit from a fairly low base, but we now felt confident that our processes were in order and we had a good story to tell. The invite was duly issued in November ‘21 and we had our assessment initiation meeting with the OSR team, headed by Dr Anna Price, in January this year. Everything was clearly explained to us in terms of how the process would run and what would be expected from us by way of evidence. The OSR team, comprising Anna, Jo Mulligan and Sarah Whitehead, were really open and friendly (surely a ruse to get us to lower our guard, lol) and keen to help with various initiatives that we were planning, such as the introduction of Reproducible Analytical Pipelines into our production process. It all seemed reasonably straightforward and, certainly as a veteran of four previous assessment campaigns in other NI Departments, much less formal and bureaucratic than I had remembered – can’t last, I thought!

Roll forward to February, and we had our follow-up meeting with the assessment team to discuss our submitted evidence but also, importantly, to draw upon information the team had gleaned from our users. This meeting involved all of our publication leads so, with virtual flak jackets donned, we braced ourselves for the inevitable onslaught. But, again, we were pleasantly surprised. The meeting was more like an interesting chat around our various processes, with helpful suggestions and resources offered which could further enhance our outputs. Of course, there were queries and clarifications sought, some of which were followed up in writing in the weeks that followed, but these were conveyed in a constructive way and the different perspective offered us an opportunity to highlight aspects of our process that we’d overlooked in our initial evidence submission.

We received first sight of our draft assessment report the following month and I admit to opening the document with a feeling of slight trepidation. I’d had the impression from our meetings that the team felt we were in reasonable shape, but there are always requirements – I mean, they have to find something, right? But no: I read the report twice, and definitely no requirements! What it did contain was a succinct summary of how we matched up against the Code of Practice pillars of Trustworthiness, Quality and Value, along with some helpful suggestions of where we could enhance our outputs against these. Several resources which we might find useful in this regard were also signposted. We had an opportunity to suggest any amendments and correct any factual inaccuracies, of which there were very few, and several weeks later we were notified that the report and recommendation to designate our outputs as National Statistics had been accepted by the OSR Regulation Committee. Our journey was finally complete! Of course, it never really ends, and we will need to continue to improve and innovate to ensure standards are maintained and the needs of ever more demanding users continue to be satisfied.

Before signing off, I thought I’d leave you with what I feel were the three most important factors in helping us achieve our designation so painlessly.

  • Invest time in achieving proper buy-in from senior management within your organisation – you will need their support to allow you to spend time developing aspects of your processes that they may not immediately see as being important to their core business.
  • Prepare, Prepare, Prepare – don’t rush to get your outputs assessed; wait until you are properly ready. We were also able to draw upon the support of our colleagues in NISRA, who provided lots of good advice and resources, including other relevant assessment reports. The assessment focus can change over time and statisticians are constantly innovating. We can learn a lot from our peers. I’m not saying take four years but, if you invite an assessment too early, you will leave yourself with a limited window, typically three months, in which to meet requirements. This could feel like a burden on top of your business as usual. Better to meet as many potential requirements ahead of time as possible.
  • Have similar outputs assessed as a batch – it might seem tempting to submit individual outputs for assessment in order to make the process more manageable or you may feel that some are more ready than others. However, there can be synergies between outputs and processes that make sense to consider together. We also included all of our publication leads when we met with the assessment team and this all helped deliver a more rounded and efficient assessment.

In my experience, the assessment process itself has definitely evolved for the better and feels more like a collaborative venture these days rather than a statistical audit. It definitely feels more light touch than previously and, although a lot of hard yards are still required to ready your outputs, it is great to see that your efforts will be recognised by the assessment team. 

The Family Practitioner Services statistics in Northern Ireland assessment report has been published today, less than six months since the initial invite for assessment. 

All in all, we found it to be a very worthwhile and positive experience, so if you are thinking of taking the plunge then go for it – you might just be pleasantly surprised!

Time doesn’t stand still

In light of our first change to the Code of Practice, which relates to the release times of official statistics and gives producers the opportunity to release statistics at times other than 9.30am where they and we judge that this clearly serves the public good, our latest blog sees Penny Babb – Head of Policy and Standards – look back over the first four years of the Code of Practice and consider its impact.

In some ways, time seems to have stood still over the last couple of years. But in other ways, I’m amazed it is only four years since we published Edition 2.0 of the Code of Practice for Statistics – how time has flown! 

As I contemplate what the impact of the Code has been since we launched it in February 2018, my abiding sense is that the heart of the Code has never been better understood or more comprehensively applied. I know I’m biased, much like a proud parent sharing the latest pics of their child’s exploits, as I was closely involved in writing the Code and am OSR’s lead on its guidance.  

I’m not sure there could have been a more profound test of the Code than statisticians needing to turn their work on its head and seemingly perform miracles – bringing new meaning to ‘vital statistics’. How easy it would have been to say in March 2020 that the ‘rule book’ needed to be ripped up as the world had changed.  

Instead, analysts in the UK had a firm guide that supported and enabled them to make new and challenging choices – what to stop, what to change. How to better speak, explain, reach out to the massive audience with an insatiable appetite for numbers, to make sense of the unknown. 

Underpinning their decisions were Trustworthiness, Quality and Value – the framework of our Code. For each dilemma faced, the answer lies in thinking TQV:  

  • What should I release – what information is needed right now to support decision making?  
  • How should I release it – how can you ensure that the public retains confidence in the statistics and the people releasing them?  
  • But there’s no time for ‘full’ QA – what information do you have confidence in that will not mislead? 

Ultimately the Code of Practice is not a rule book but a guidebook. 

Very little in the Code is a directive. One exception has been the practice setting the time of release of official statistics at 9.30am (T3.6). Today we are changing this practice following our three-month consultation in autumn 2021 and conversations with producers, users and other stakeholders. With the focus firmly on enabling the statistics to best serve the public good, producers can now consider whether there is a need to release their official statistics at a time other than 9.30am.

There is nothing special about this specific time of day. The main characteristic is that it is a standard time across all official statistics producers in the UK that helps ensure consistency and transparency in the release arrangements and protects the independence of statistics. This is an essential hallmark of official statistics, and a norm that we continue to expect and promote. 

However, as the pandemic clearly showed, there are some situations when a producer may sensibly wish to depart from the standard time. The new practice enables greater flexibility, but a release time should still depend on what best serves the public good.  

We will continue to protect against political interference and ensure the public can have confidence in the official statistics. The final decision on whether an alternative release time is used will be made by the Director General for Regulation – the head of the Office for Statistics Regulation. All applications will be listed in our new table – it will include rejections as well as cases where the application was approved. 

And there may be further changes to the Code of Practice to come this year. For example, our review of the National Statistics designation has already proposed changing the name of experimental statistics. So, watch this space for further developments. And if you have any reflections on the Code of Practice, please feel free to send them to regulation@statistics.gov.uk

Exploring the value of statistics for the public

In OSR (Office for Statistics Regulation), we have a vision that statistics should serve the public good. This vision cannot be achieved without understanding how the public view and use statistics, and how they feel about the organisations that produce them. One piece of evidence that helps us know whether our vision is being fulfilled is the Public Confidence in Official Statistics (PCOS) survey.

The PCOS survey, which is conducted independently on behalf of the UK Statistics Authority, is designed to capture public attitudes towards official statistics. It explores trust in official statistics in Britain, including how these statistics are produced and used, and it offers useful insights into whether the public value official statistics.

When assessing the value of statistics in OSR, two of the key factors we consider are relevance to users and accessibility. The findings from PCOS 2021, which have been published this week, give much cause for celebration on these measures, while also raising important questions to explore further in our Research Programme on the Public Good of Statistics.

Do official statistics offer relevant insights?

PCOS 2021 shows that more people are using statistics from ONS (Office for National Statistics) now compared to the last publication (PCOS 2018). In the 2018 publication of the PCOS, 24% of respondents able to express a view said they had used ONS statistics, but this has now increased to 36%. This increase may be due to more people directly accessing statistics to answer questions they have about the COVID-19 pandemic. In our Research Programme, we are interested in knowing more about this pattern of results and also understanding why most people are not directly accessing ONS statistics.

Are official statistics accessible?

PCOS 2021 asked respondents if they think official statistics are easy to find, and if they think official statistics are easy to understand. These questions were designed to capture how accessible official statistics are perceived to be by members of the public. Most respondents able to express a view (64%) agreed they are easy to find. This is an important finding because statistics should be equally available to all, without barriers to access. Most respondents able to express a view (67%) also agreed that statistics were easy to understand, suggesting that two thirds of respondents feel they can understand the statistics they want to.

However, respondents who were aged 65 or older were least likely to agree with these two statements. Statistics serving the public good means the widest possible usage of statistics, so this is an important finding to explore further to ensure that older respondents are able to engage with statistics they are interested in. In our Research Programme, we will work to identify what barriers might be causing this effect and whether there are other groups who feel the same way too.

The value of statistics

Considering how the value of statistics can be upheld, respondents in PCOS 2021 were asked to what extent they agree with the statement “it is important for there to be a body such as the UK Statistics Authority to speak out against the misuse of statistics”. The majority (96%) of respondents able to express a view agreed with this statement, with a similar proportion (94%) agreeing that it is important to have a body that can ensure official statistics are produced free from political interference. While we are cautious about putting too much weight on these two questions in the survey, these findings may at the very least indicate that the public value the independent production of statistics, as well as challenges to the misuse of statistics.

In conclusion, PCOS 2021 suggests that statistics are relevant and accessible to many members of the public, but there are still some who do not access statistics or consider them easy to find or easy to understand. While the findings of PCOS 2021 offer a wealth of important information, and demonstrate the value of official statistics, it is clear there are still a lot of questions to explore in our Research Programme. We will continue our work to understand what statistics serving the public good means in practice, guided by knowledge from PCOS 2021.

Thinking about Trustworthiness, Quality, and Value across Housing and Planning statistics

When I wrote my last blog just over two years ago, reflecting on the progress that had been made on improving the public value of housing and planning statistics in the UK, little did I know that we were about to enter a pandemic. Cue homeworking, home schooling and everything else that went with it.

In that original blog, I highlighted some of the areas that we would like to see addressed and some recommendations that still needed some work such as “filling gaps on UK private rental sector levels and land use; the need for cross-government work to improve coherence on housing conditions and the quality of UK house-building statistics; and further development of UK homelessness and rough sleeping statistics.”

During those two years, and against a backdrop of disruption, statistics producers adapted to the challenges of homeworking and of resources being redirected to support COVID-related analysis. For example, the English Housing Survey, like many other face-to-face surveys, adapted by moving online, and the team was also able to provide valuable insight into the effects of the pandemic on households through its household resilience survey. Producers started to collect and publish management information on the numbers of rough sleepers, and those at risk of homelessness, helped off the streets during lockdown. We also saw innovation in the introduction of data dashboards, such as that from the Scottish Housing Regulator, which included information on the impact of COVID-19 on social landlords and their tenants.

Progress has also been made outside of pandemic-related work. For example, on the theme of private rents, ONS has recently announced further information on its redevelopment of private rental prices statistics, and Government Statistical Service (GSS) statisticians collaborated on this ONS report on Homelessness in the UK. 

The cross-GSS housing and planning working group published its latest work plan at the end of 2021, and it’s great to see some ambitious work planned which ties in to our original recommendations. The work of the group also feeds well into ‘TQV’ – trustworthiness, quality, and value – the framework of our Code of Practice for Statistics.

Think Trustworthiness

We often encourage statistics producers to publicly share their work plans to demonstrate transparency and to foster wider and better engagement with users. That is why we welcome this latest work plan for 2021/22, which includes summaries of progress to date. This is an achievement in itself, given all the disruption to statistics producers and teams across the GSS due to the pandemic, and we would like to congratulate all involved.

It’s great that the working group has made an ongoing commitment to engage with users to help shape the work programme and development plans. It is so important for producers to listen to what users have to say and to reflect on the feedback, and for user engagement to be planned and collaborative.

Part of this includes continued engagement with users on what they would like to see as topics for the next coherence articles – the suggested priority areas being the private rental sector and energy efficiency – gaps that we identified ourselves through speaking to users.

Think Quality

In the homelessness and rough sleeping landscape, we are really encouraged to see the plans set out around improving the quality of homelessness and rough sleeping data across the four UK countries, some of which echo the findings from our recent assessment of DLUHC’s statistics on Statutory Homelessness in England. In particular, we are excited to hear that the Welsh Government and DLUHC are continuing to publish their rough sleeping management information, with DLUHC’s latest rough sleeping snapshot including commentary on the separate management information collected since the start of the pandemic.

Also, data from Census 2021 for England and Wales, due to be published later this year, have the potential to provide further insights into homeless populations or those at risk of homelessness.

Think Value

We are hoping that, through ONS’s equalities and inclusion work, improvements to the inclusion of under-represented population groups in statistics – in particular homeless groups – will go some way to filling gaps on the ‘hidden homeless’, such as sofa surfers, whom official data do not currently capture. This is a theme that came up when we spoke to users as part of our assessment of the existing Statutory Homelessness in England statistics, and a group that some homeless charities and organisations have looked to measure.

Through our regulatory work we are starting to see more statistics producers make use of Power BI to produce more accessible and interactive content. We have already publicly commended this in outputs from our regulatory work and welcome the news that DLUHC is rolling this initiative out further across more of its statistics publications.

We would like to recognise the work at ONS on Housing Affordability statistics, with the team looking to provide further valuable insight by publishing additional measures such as purchase affordability, along the lines of their existing output on private rental affordability in England.

We are delighted to hear of GSS plans related to housing and energy – a very topical issue at the moment – starting with looking at the energy efficiency of housing at a subnational level. This is in line with the GSS subnational data strategy ambition to produce and disseminate increasingly granular statistics. It also highlights further collaborative working and the sharing of knowledge and priorities across the GSS.

Looking forward

As we head into spring, looking forward to what the rest of this year will bring, I am also looking forward to the exciting developments across housing and planning statistics which build on the hard work of GSS statisticians to improve the value of housing and planning statistics across the UK. For example, since the publication of the work plan ONS has published a research output in March looking at the feasibility of using administrative data to provide information on property types in England and Wales as an alternative to census data. We also look forward to seeing the GSS publish further updates on its progress throughout the year and an updated plan of priorities for 2022/23. 

Acting against the misuse of statistics is an international challenge

That was the message of the event on 14 March hosted by the UK Statistics Authority on the prevention of the misuse of statistics, which formed part of a wider campaign to celebrate the 30th anniversary of the UN Fundamental Principles of Official Statistics.

My fellow panellists, Dominik Rozkrut and Steven Vale, and I discussed a range of topics, from statistical literacy to best practice in regulation, and from memorable examples of misuse to the cultural differences that affect public trust internationally. Although we all had different experiences and approaches, it was clear that we shared a common passion for the truth and for statistics that serve the public good.

The event had all the merits of the online meetings we’ve all become familiar with: lots of people able to join from a wide range of locations, and lots of opportunities to engage using live chat functions.

Perhaps some people find it less intimidating to type a question into an app than to raise their hand in a crowded room, because there were lots of interesting questions asked, and it was clear that the issue of preventing the misuse of statistics generated a lot of interest and passion from the audience as well as the panellists.

But the event also brought a new kind of frustration for me as a speaker: there were too many questions to answer in the time available, and I felt bad that we couldn’t answer all the questions that people typed in.

So, in an attempt to rectify that, I’ve decided to use this blog to address the questions that were directly for me that I didn’t answer in real time, and those which touched on the key themes that came across during the Q&A.


“Who are the best allies in preventing misuse or building statistical literacy outside of stats offices? Are there any surprising allies?”

There are obvious allies for us, like Full Fact and the Royal Statistical Society.

I’d also like to give a shout-out to the work of Sense about Science. Their work highlights that there is a huge amount of interest in evidence, data and statistics – and that a simple model of experts versus “the public” is far too simplistic.

There are a huge range of people who engage with evidence: teachers, community groups, people who represent patients, and a lot of others. These people, who want to find out the best evidence for their community, are fantastic allies.

And I’d also pick out a surprising ally: politicians. In our experience, politicians are almost always motivated to get it right, and not to misuse statistics, and they understand why we make the interventions we do. So perhaps they are the ally that would most surprise people who look at our work.

“How important is statistical literacy among the media and general public in helping prevent the misuse of statistics?”

I think critical thinking skills are important. People should feel confident in the statistics that are published, but also feel confident that they know where to find the answers to any questions they have about them.

But equally, we need statistical producers to be better in how they communicate things like uncertainty, in a way that is meaningful for the public.

So rather than putting the responsibility of understanding solely on the user, and just talking about statistical literacy, let’s also talk about producers’ understanding – or literacy if you will – about public communication.

“You have mentioned that sometimes the stats get misinterpreted because of the way they are presented – can you share some examples?”

My point here was that misinterpretation is a consequence of what producers of statistics do. One example we’ve seen frequently during the pandemic concerns data on the impact of vaccines. It’s been the case that sometimes people have picked out individual numbers produced by public health bodies and highlighted them to argue their case about vaccines. Producers need to be alive to this risk and be more willing to caveat or re-present data to avoid this kind of misinterpretation.

“What are your views on framing statistics for example 5% mortality rate vs 95% survival rate? Both are correct but could be interpreted very differently.”

I find it impossible to answer this question without context, sorry! I definitely wouldn’t say that, as an absolute rule, one is right and the other is wrong. It depends on the question the statistician is seeking to inform. I can’t be more specific than that in this instance.

However, to avoid possible misinterpretation, we always recommend that producers use simple presentation, with clear communication about what the numbers do and do not say.

“How do we balance freedom of expression with the need to prevent the abuse and misuse of statistics?”

We don’t ban or prohibit people from using statistics, so in that sense there’s no barrier to freedom of expression. But we do want to protect the appropriate interpretation of statistics – so our interventions are always focused on clarifying what the statistics do and don’t say, and asking others to recognise and respect this. It’s certainly not about constraining anyone’s ability to express themselves.

“What’s the most damaging example of misuse of statistics that you’ve come across in your career?”

Here I don’t want to give a single example, but rather a type of misuse which really frustrates us. It’s when a single figure is used as a piece of number theatre, but the underlying dataset from which it is drawn is not available, so it’s not possible for the public to understand what sits behind the apparently impressive number. It happens a lot, and we are running a campaign, which we call Intelligent Transparency, to address it.

“Can you give us some more insight into how you steer clear of politics, media frenzies, and personalities?”

We always seek to make our intervention about clarifying the statistics, not about the arguments or policy debates that the statistics relate to. So we step in and say, “this is what the statistics actually say” and then we step out. And we don’t tour the news studios trying to get a big name for ourselves. It’s not our job to get media attention. We want the attention to be on what the statistics actually say.


I hope these answers are helpful, and add some context to the work we do to challenge the misuse of statistics. I also hope everyone reading this is going to follow the next series of events on the UN Fundamental Principles of Official Statistics.

The next round of events is moving on from preventing misuse, to focusing on the importance of using appropriate sources for statistics. Find out more about them on the UNECE website.

Time for a change: amending the Code of Practice to allow alternative release times for official statistics

On 5 May 2022, we’ll be making our first change to our Code of Practice for Statistics since its major overhaul in 2018. We’ll be changing two practices, which relate to the release times of official statistics, giving producers the opportunity to release statistics at times other than 9.30am, where they and we judge that this clearly serves the public good. These changes are not being undertaken lightly: this blog explains how we have reached our decision and what this change will mean for producers and statistics in the future.

The coronavirus pandemic underscored the importance of ensuring statistics are shared in ways that reassure users about their integrity and independence: a big part of this is ensuring equality of access, so that statistics are available to everyone at the same time. Since its inception, our Code has stated that official statistics should be released to all users at 9:30am on a weekday (practice T3.6). However, during the pandemic the Director General for Regulation granted several exemptions for release at times other than 9:30am – to key economic statistics and to some COVID-19 related statistics.

These decisions led us to review two release practices in the Orderly Release principle of the Code of Practice for Statistics. As part of this, we ran a public consultation between September and December 2021 to consider whether greater flexibility in release arrangements should be formalised within the Code of Practice for Statistics. Today we have published the findings from our consultation, which have helped inform our decision to revise these two practices. 

For most official statistics, producers, supported by several other stakeholders, told us they believe a standard release time of 9:30am best serves the public good: it enables release in an open and transparent manner, gives consistency across the GSS and protects against political interference. However, producers also said that there are some situations in which an alternative release time can better serve the public good, and they would like to be able to nominate one where they see evidence of the benefit. For example, this could be where users can be better reached by timing a statistical release so that it can be promoted through the media: enabling statisticians to speak directly to journalists and the public about their statistics proved a powerful means of ensuring statistics were well used during the pandemic.

Considering producer views and our consultation findings, we are amending practice T3.6 so that an alternative release time for statistics can be requested where the producer’s Head of Profession for Statistics believes that it will enable the statistics to better serve the public good. To ensure that producers remain orderly and transparent about release arrangements and can maintain statistical independence, this alternative time request will still need to be approved by the Director General for Regulation. All approvals will be listed in our new alternative release times table. The alternative time is to be used consistently by the producer for those statistics and should be announced in advance.

We have outlined our expectations of statistics producers seeking alternative release times in our explainer on the public good test. We highlight that considering the public good means understanding the wider use of the statistics, how the public access them – whether direct from producers, via the media or by other means – and how release arrangements support both debate and decision making. Essentially, producers should consider the way their statistics inform and benefit society. Being open about key decisions around release arrangements is central to maintaining the confidence of users in official statistics. To this end we are asking the Office for National Statistics to engage with its users and other stakeholders, to explain its release approach for several key economic statistics that it already publishes at 7am, and how it is balancing competing demands.

As ever for OSR, a fundamental question driving this review has been, “how can we best serve the public good?” Giving producers the opportunity to decide on alternative release times, based on their own assessment of how their statistics best serve the public good, helps us meet this ambition, which is central to both our vision as a regulator and to the Government Statistical Service’s 2020-2025 strategy. 

Please see our new policy on alternative release times for further information. You can also read more about the findings from the consultation. The changes to the Code will come into effect from 5 May 2022. If you have any questions, please contact us via regulation.support@statistics.gov.uk.

QCovid® case study: Lessons in commanding public confidence in models

Methods expert Jo Mulligan gives an insight into the lessons learned from reviewing the QCovid® risk calculator and what they tell us about commanding public confidence in models.

I re-joined OSR around 15 months ago in a newly created regulator role as a methods ‘expert’ (I struggle with the use of the word ‘expert’ – how could anyone be an expert in all statistical methods? Answers on a postcard, please). Anyway, with my methods hat on, several colleagues and I have been testing the lessons that came out of our review of the statistical models designed to award grades in 2020. That review looked at the approach taken to developing statistical models to award grades in the absence of exams, which were cancelled because of the pandemic. Through this work, OSR identified key factors that affected public confidence and distilled them into lessons for those developing models and algorithms in the future.

Applying our lessons learnt to QCovid®

We wanted to see if the lessons learnt from our review of the grading models in 2020 could be applied in a different context, to a different sort of algorithm, and test whether the framework stood up to scrutiny. We chose another model to carry out the testing, the QCovid® risk calculator, also developed in response to the pandemic.

In 2020, the Chief Medical Officer for England commissioned the development of a predictive risk model for COVID-19. A collaborative approach was taken, involving members from the Department of Health and Social Care (DHSC), NHS Digital, NHS England, the Office for National Statistics, Public Health England, the University of Oxford, researchers from other UK universities, NERVTAG, Oxford University Innovation, and the Winton Centre for Risk and Evidence Communication. It was a UK-wide approach, agreed across the four nations and including academics from Wales, Scotland and Northern Ireland.

The original QCovid® model that we reviewed calculates an individual’s combined risk of catching COVID-19 and dying from it, allowing for the inclusion of various risk factors. The QCovid® risk prediction model calculates both the absolute risk and the relative risk of catching and dying from COVID-19. The QCovid® model also calculates the risk of catching COVID-19 and being hospitalised but these results were not used in the Population Risk Assessment.

What is absolute risk? This is the risk to an individual, based on what happened to other people with the same risk factors who caught COVID-19 and died as a result.

What is relative risk? This is the risk of COVID-19 to an individual compared with someone of the same age and sex but without the other risk factors.
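To make these two measures concrete, here is a minimal sketch in Python using made-up cohort numbers; none of the figures or function names come from QCovid® or its publications.

```python
# Illustration only: hypothetical cohort counts, not QCovid(R) outputs.

def absolute_risk(deaths: int, cohort_size: int) -> float:
    """Share of people with a given set of risk factors who caught
    COVID-19 and died: an estimate of the absolute risk for an
    individual with those factors."""
    return deaths / cohort_size

def relative_risk(risk_with_factors: float, baseline_risk: float) -> float:
    """Risk for an individual compared with someone of the same age
    and sex but without the other risk factors."""
    return risk_with_factors / baseline_risk

# Hypothetical example: of 10,000 people sharing an individual's risk
# factors, 40 died; of 10,000 people of the same age and sex without
# those factors, 8 died.
ar = absolute_risk(40, 10_000)        # 0.004, i.e. 0.4%
baseline = absolute_risk(8, 10_000)   # 0.0008, i.e. 0.08%
rr = relative_risk(ar, baseline)      # 5.0: five times the baseline risk

print(f"absolute risk: {ar:.2%}, relative risk: {rr:.1f}x")
```

The distinction matters for communication: an absolute risk can be very small even when the relative risk sounds alarming, which is one reason the model reports both.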

The academic team, led by the University of Oxford, developed the model using the health records of over eight million people. It identified certain factors, such as age, sex, BMI, ethnicity and existing medical conditions, that affected the risk of being hospitalised or dying from COVID-19. The team then tested the model’s performance using the anonymised patient data of over two million other people. Having identified these risk factors, NHS Digital applied the model to the medical records of NHS patients in England, and those identified as being at an increased risk of dying from COVID-19 were added to the Shielded Patient List (SPL).
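This derive, validate, then apply pattern can be sketched in miniature. The code below is illustrative only: it uses synthetic data and a logistic-regression stand-in rather than QCovid®’s actual published methodology, and the variables, threshold and cohort sizes are all invented for the example.

```python
# A toy version of: derive a risk model on one cohort, check its
# performance on a held-out cohort, then apply it to new records.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 100_000                                    # stand-in for millions of records
X = rng.normal(size=(n, 5))                    # synthetic "age, BMI, ..." features
logit = -6 + X @ np.array([1.2, 0.6, 0.4, 0.2, 0.1])
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # 1 = died, 0 = survived (synthetic)

# 1. Derive the model on one cohort; hold out another for validation.
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)

# 2. Check discrimination on the held-out cohort before any deployment.
auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"validation AUC: {auc:.3f}")

# 3. Apply the validated model to new records and flag those above a
#    risk threshold (arbitrary here) for follow-up, as with the SPL.
new_patients = rng.normal(size=(1_000, 5))
risk = model.predict_proba(new_patients)[:, 1]
print(f"{(risk > 0.05).sum()} of 1,000 new patients flagged as higher risk")
```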

This approach was ground-breaking as there was no precedent for applying a model to patient records to identify individuals at risk on such a scale. Before the development of QCovid®, the SPL had been based on a nationally defined set of clinical conditions and local clinician additions. People added to the SPL through application of the QCovid® model were prioritised for vaccination and sent a detailed letter by DHSC advising them that they may want to shield.

The QCovid® model was peer-reviewed and externally validated by trusted statistical bodies such as the ONS, and both the results and the QCovid® model code were published.

What we found from reviewing QCovid®

In testing the lessons from the review of the grading models in 2020, we found that some were less relevant for QCovid®. For example, the lesson about being clear and transparent on how individuals could appeal decisions made automatically by the algorithm mattered less in this review. This is because, although individuals were added to the SPL through the model, shielding was advisory only, and individuals (or GPs on their behalf) could remove themselves from the list. Finding lessons that are less relevant in a different context is to be expected, as every algorithm or model will differ in its development, application and outcomes.

As part of this review, we did identify one additional lesson: how often the underlying data should be refreshed for the algorithm to remain valid and appropriate, especially if it is used at different points in time. This was not relevant for the review of grading models, as they were only intended to be used once. However, in a situation such as the pandemic, where new information is being discovered all the time, this was an important lesson.

What do we plan next?

We found that the framework developed for the review of grading models proved to be a useful tool in helping to judge whether the QCovid® model was likely to command public confidence. It provided assurance about the use of the model and stood up well under scrutiny. Additionally, working on this review has helped us to understand more about QCovid® itself and the work behind it. QCovid® provides a great example that models and algorithms can command public confidence when the principles of Trustworthiness, Quality and Value (TQV) are considered and applied. In terms of how we will use these findings going forward, we have updated our algorithm review framework and this example will feed into the wider OSR work on Guidance for Models as it continues to be developed this year. 

I really hope this work will be useful when we come across other algorithms that have been used to produce statistics and also that when we incorporate it into our Guidance for Models that others will benefit more directly too. So, this concludes my first blog in my Methods role at OSR, and in fact, my first blog ever!

Guest blog: Improving reporting and reducing misuse of ethnicity statistics

Richard Laux, Deputy Director, Data and Analysis, at the Equality Hub discusses his team’s work in improving reporting and reducing the misuse of ethnicity statistics in our latest guest blog, as part of the 30th anniversary of the United Nations’ Fundamental Principles of Official Statistics.

In my role as the Head of Analysis for the Cabinet Office’s Equality Hub I am in the privileged position of leading the team that analyses disparities in outcomes between different ethnic groups in the UK.

The reasons for disparities between ethnic groups are complex, and include factors such as history, relative levels of deprivation, the different age profile of some ethnic groups as well as many other factors. Despite the complexity of the issues, my team and I do all we can to prevent misuse of the data and help ensure that robust and clearly explained data are furthering the debate on race and ethnicity, which is an emotive topic for many people in this country.

My team’s responsibility for this is firmly rooted in the UN Principle 4 of preventing the misuse of statistics. We do this in a number of ways that align with this principle.

One way we do this is through bringing several analyses together to paint a broad-based picture of a topic of interest. For example, when supporting the Minister of State for Equalities on her reports on progress to address COVID-19 health inequalities we synthesised a large body of research describing the impact of the pandemic on ethnic minority groups. Much of this work involved my team reconciling and reporting on different sources and drawing robust conclusions from different analyses that didn’t always entirely agree.

A second way we try to prevent misuse of data is through the clear presentation of statistics, an example being Ethnicity facts and figures. This website was launched in October 2017 and since then it has been a vital resource to inform the debate about ethnicity in the UK. It gathers together government data about the different experiences of the UK’s ethnic groups and is built around well-established principles, standards and practices for working with data like the Code of Practice for Statistics.

We try to make the content on the website clear and meaningful for people who are not experts in statistics and data. It also contains detailed background information about how each item of data was collected and analysed to help those users with more interest or expertise in statistics draw appropriate conclusions.

The Commission on Race and Ethnic Disparities report recommended that the Race Disparity Unit (RDU) lead work to further improve both the understanding of ethnicity data and the responsible reporting of it (thereby helping to prevent its misuse). As part of this work, we will consult on how to improve the Ethnicity facts and figures website, including whether we should increase the amount of analysis on the site to help users better understand disparities between ethnic groups. Some of this might be in a similar vein to Office for National Statistics (ONS) work during the pandemic on ethnic contrasts in deaths involving COVID-19. This modelling work showed that location, measures of disadvantage, occupation, living arrangements, pre-existing health conditions and vaccination status accounted for a large proportion of the excess rate of death involving COVID-19 in most ethnic minority groups.

Of course, there can be some difficulties with data that might lead to its misuse: datasets can vary greatly in size, consistency and quality. There are many different ways that ethnicity is classified in the datasets on Ethnicity facts and figures, and these classifications can differ widely depending on how and when the data was collected. For example, people might erroneously compare outcomes for an ethnic group over time believing its definition has remained the same when in fact it has changed. This might happen if someone looks at data for the Chinese, Asian or Other groups over a long time period: the Chinese group was combined into the ‘Other’ ethnic group in the 2001 version of the aggregated ethnic groups, but into the Asian group in the 2011 version, in England and Wales.
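As a hypothetical illustration of how this catches people out, the sketch below applies two simplified aggregation mappings to the same made-up detailed counts. The mappings are cut down to a handful of categories and are not the full ONS classifications.

```python
# Simplified, illustrative mappings: the detailed 'Chinese' category is
# aggregated into 'Other' under the 2001-style scheme but into 'Asian'
# under the 2011-style scheme (England and Wales).
AGG_2001 = {"Chinese": "Other", "Indian": "Asian", "Pakistani": "Asian"}
AGG_2011 = {"Chinese": "Asian", "Indian": "Asian", "Pakistani": "Asian"}

counts = {"Chinese": 400, "Indian": 1400, "Pakistani": 1100}  # made-up data

def aggregate(counts: dict, mapping: dict) -> dict:
    """Sum detailed counts up to the aggregated groups of a given scheme."""
    totals: dict = {}
    for detailed, n in counts.items():
        group = mapping[detailed]
        totals[group] = totals.get(group, 0) + n
    return totals

# The same underlying population yields different 'Asian' totals, so a
# naive 2001-vs-2011 comparison suggests a change that is purely an
# artefact of the classification.
print(aggregate(counts, AGG_2001))  # {'Other': 400, 'Asian': 2500}
print(aggregate(counts, AGG_2011))  # {'Asian': 2900}
```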

We also try to minimise misuse and misinterpretation by promoting the use of established concepts and methods, including information on the quality of ethnicity data. Our quality improvement plan and significant contribution to the ONS implementation plan in response to the Inclusive Data Taskforce set out our ambitions for improving the quality of ethnicity data across government. We will also be taking forward the Commission on Race and Ethnic Disparities’ recommendation that RDU should work with the ONS and OSR to develop and publish a set of ethnicity data standards to improve the quality of reporting on ethnicity data. We will consult on these standards later this year.

Finally, we raise awareness and knowledge of ethnicity data issues through our ongoing series of published Methods and Quality Reports and blogs. For example, one of these reports described how the overall relative stop and search disparity between black people and white people in England and Wales can be misleading if geographical differences are not taken into account.

We have significant and ambitious programmes of analysis and data quality work outlined for the future. I would be grateful for any views on how we might further help our users in interpreting ethnicity data and preventing misuse.