OSR in the pandemic and beyond: Our year so far

The first half of 2021 has seen further lockdowns, an impressive vaccination rollout and, as we move into summer, some easing of restrictions across the UK. It’s also been a busy time for us, as we continue to push for the production of official statistics, and other forms of data, that serve the public good. We really feel that public good matters more than ever.

We recently published our business plan for 2021/22, in which we outline our focus for the statistical system over the coming year and how it can consolidate the huge gains made in data collection and publication. We have also made progress on our role in data. Our review of the approaches to developing statistical models to award 2020 exam results may well be the most comprehensive review of the 2020 exam story. It’s comprehensive in two senses. It covers all four parts of the UK, unlike other reviews. And it goes beyond technical issues about algorithms to identify lessons for all public bodies that want to use statistical models in a way that supports public confidence. We have also published an insightful review on Reproducible Analytical Pipelines and our research programme.

The use of statistics during the pandemic

Statistics have played, and will continue to play, an important and extremely visible role in all our lives. I recently provided evidence to the inquiry run by the House of Commons Public Administration and Constitutional Affairs Committee on the use of data during the pandemic. Since the start of the pandemic, governments across the UK have maintained a flow of data which has been quite remarkable. We continue to push for further progress, for example on vaccination data.

Statistical Leadership

One thing that the pandemic has highlighted is how important it is for leaders to be analytical, something our recently published Statistical Leadership report makes clear.

Good analytical leadership will be crucial to answering the many questions that have arisen over the course of the pandemic and continue to come to light, including questions about the importance of transparency. We are currently planning an in-depth discussion of these issues and more at our second OSR annual conference, which we aim to host later this year, focusing on high quality data, statistics and evidence.

Looking Forward

There are lots of good things happening for statistics at present. I was delighted to see changes to pre-release access in Scotland because equality of access to official statistics is a fundamental principle of statistical good practice.

I am also really looking forward to announcing the results of our 2021 annual award for Statistical Excellence in Trustworthiness, Quality and Value in July. This is the second year we have worked in partnership with the Royal Statistical Society to offer the award.

Keep up to date with our latest work and news by following us on Twitter, and sign up to our monthly newsletter.

The productivity puzzle: Looking at how productivity in the UK is measured

This is a guest blog from Stuart McIntyre, Head of Research at the Fraser of Allander Institute.

Productivity is a term that economists intuitively understand, but non-specialists can often find it difficult to track and interpret changes in productivity.

Since the global financial crisis of 2008/09 (GFC), productivity in the UK has both underperformed relative to other developed economies and underperformed its own historic growth. This has come to be known as the ‘productivity puzzle’.

Why has UK productivity performance seemingly slumped? And why should people care?

UK productivity performance might seem like it doesn’t matter to the average worker – but this would be to misunderstand what it represents.

Improvements in productivity – producing more with the same or fewer inputs, like hours worked – are a key element in unlocking improvements in wages for workers and competitiveness for businesses.

As the Nobel Prize winner Paul Krugman once said, “Productivity isn’t everything, but, in the long run, it is almost everything. A country’s ability to improve its standard of living over time depends almost entirely on its ability to raise its output per worker”. Indeed, one of the key reasons why living standards in the UK have stagnated over the last few years has been the weak performance of UK productivity.

Productivity matters. For people trying to understand how the economy is performing, it is a key indicator – not just as a number, but also in comparison to other parts of the country and indeed other countries.

How is productivity measured in the UK?

The Office for National Statistics produces regular updates on UK productivity. These include measures of labour productivity (output per hour worked and per job), as well as multi-factor productivity – which measures the change in economic output that cannot be explained by changes in inputs like labour and capital.
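The labour productivity measures described above are, at heart, simple ratios. A minimal sketch, using entirely made-up figures rather than real ONS data:

```python
# Labour productivity as simple ratios (all figures are invented for
# illustration - they are not real ONS data).
gva = 500_000.0          # gross value added over the period, in £
hours_worked = 20_000.0  # total hours worked over the same period
filled_jobs = 400        # average number of filled jobs

output_per_hour = gva / hours_worked  # labour productivity per hour
output_per_job = gva / filled_jobs    # labour productivity per job

print(f"Output per hour worked: £{output_per_hour:.2f}")
print(f"Output per job:         £{output_per_job:.2f}")
```

Multi-factor productivity, by contrast, is estimated as the residual output growth left over after accounting for growth in labour and capital inputs, so it cannot be read off a single period’s figures in this way.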

One challenge is that many people take labour productivity measures to be the same as overall productivity, which ignores broader measures. In particular, understanding changes in productivity requires us to look beyond what is happening to labour productivity and look at whether businesses are investing. We also need to consider changes in innovation, regulation, and the business environment.

Since the GFC we’ve seen significant improvements in the quality and range of productivity statistics, and also significant investment by researchers inside and outside of government to understand why the UK’s productivity performance has been so poor.

Even among experts, there are competing assessments of the root cause, and it seems likely that a number of factors are important. Explanations range from weak investment in skills and training, as well as in research and development, through to factors causing weak demand in the economy. Another interesting line of inquiry centres on whether part of the answer lies in how we measure productivity.

All of which invites significant investment in our economic data – and since the GFC we have seen a lot of this take place.

We now have more and better productivity statistics in the UK. For example, we know much more about how productivity differs between firms, between regions, and between the UK and its international competitors.

How could we improve how productivity is measured?

We’ve seen important measurement issues identified in the academic literature being addressed, and key ideas from academic research – such as the role of management practice in explaining productivity performance – being tested through the collection of new survey data from businesses.

Yet there remains lots still to do.

We’ve seen increasing use of so-called administrative data, like VAT returns, to measure what is happening in the economy in a timelier manner. These data developments are vital – and have an important role to play in measuring productivity.

At the same time, we know how important measurement issues can be in tracking productivity in particular parts of the economy – this is increasingly important as the economy undergoes significant change with digitalisation and growth of new technologies. It must remain a key area of focus.

One area of economic statistics where there is increasing investment and focus is sub-UK statistics – as the UK Government embarks upon its ‘Levelling Up’ agenda, understanding what is happening in the regions and nations of the UK will be increasingly important. This is an area where productivity statistics could also use further investment.

This means providing more timely estimates of regional productivity, but also ensuring that analysis and statistics produced at a UK level are paralleled at a sub-national level. The same goes for measurement of the factors that are important in explaining movements in productivity, for example what is happening to capital investment.

There is also scope to help non-expert users engage with these statistics. The past few years have seen growing use of infographics and social media – which has helped the latest data reach a broader audience.

More generally, we’ve seen increasing focus on setting out the uncertainty around economic statistics. It is clear from recent evidence that users of economic statistics benefit – in the form of improved understanding of what the statistics show – from having the uncertainty inherent in economic statistics explained to them. In my view, this is an area where the productivity statistics could also be enhanced.

In summary, the post-GFC period has seen renewed interest and investment in understanding UK productivity: in particular, investing in more local statistics, making greater use of administrative data, and communicating and explaining these statistics to non-expert users. We’ve come a long way, but there remains much still to do.

Making an impact: How can statisticians help journalists to answer key questions with data?

As part of our work on statistical leadership, we are hosting a series of guest blogs. This blog is from Robert Cuffe, Head of Statistics at the BBC.

It’s not all rush, rush, rush in news. One colleague, helping me with my transition from the months-away-deadlines of clinical research to the five-minute-deadlines of breaking news, revealed that when he worked on the evening TV news he had often had the luxury to go away and “just think about a story for ages, like, easily 15 minutes”. It didn’t help much.

Does it need to be so frenetic? And, if so, what does that mean for departments or statisticians trying to get their data covered broadly and correctly?

We can debate about five minutes versus fifteen, but the news machine does need to work very quickly. Editors are the last line of defence between everything that happened in the world recently and your news feed.

Every select committee, every ministerial pronouncement or opposition line, every report from a charity, think tank or government statistician, every freak goal from any team anywhere in the world, every man who bit a dog, every new collection, every sleb indiscretion and for quite some time almost every major number about the pandemic gets reviewed, and someone has to decide “does this make?”

If it does, do we need to send a video crew to capture footage and interview bystanders, how quickly can we get 400 words up online to summarise this, who are good voices to put this into context, does the final script or writeup need to be run by the duty lawyer? And so on.

And, if not, what else can we put in the paper or in the bulletin? Because the programme is happening at 6pm (or the print run starts at 7pm) whether or not we’re ready.

So decisions have to be made pretty quickly. And fast, good decisions about data need a prominent, clear and succinct summary of what the data can or can’t say.

As statisticians, we understand better than anyone else what the data can say. But our deep understanding of and intuition for the principles of our specialism do not always help with the prominent, simple summary. There are professionals who specialise in that translation: journalists or communications officers.

Here are some good guidelines given to me by an excellent journalist, who wanted me to help editors understand (in reverse order):

  • What are the caveats?
  • How prominently do we need to alert the reader/listener/viewer to them?
  • Can we run the story with this top line (or a different top line that the caveats don’t undermine)?

I’ve learnt to choose my caveats carefully: pushing hard on some and forgetting completely about others. Almost every number comes with a margin of error, but not every number story needs to include a confidence interval at the top. If the CI contains the null, why run the story? And if the bottom of the CI is miles away from the null, who cares that the truth could be a teeny bit higher or lower?
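The caveat triage described here can be caricatured as a decision rule on the confidence interval. This is a deliberately simplified sketch – the function and its wording are invented, and real editorial judgement is of course not this mechanical:

```python
def ci_verdict(lower: float, upper: float, null: float = 0.0) -> str:
    """Toy triage of a top-line estimate by its confidence interval.

    If the interval contains the null value, the apparent effect may not
    exist at all; if it is clear of the null, the interval's exact width
    matters far less to the headline than the direction of the effect.
    """
    if lower <= null <= upper:
        return "CI contains the null - question whether to run the story"
    return "CI is clear of the null - the top line survives the caveat"

# e.g. an estimated change of +2 points, with two different intervals:
print(ci_verdict(-1.0, 5.0))  # interval straddles zero
print(ci_verdict(1.0, 3.0))   # interval is clear of zero
```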

Equally, burying a key caveat – say “the time series jumps suddenly because the data definitions have changed” – on page 28 won’t do when decisions or press releases are based on the executive summary.

So getting in the room for the painful discussion of final sign-off on a report or a press release, when much of the meeting feels like it’s about where the semi-colon goes, can be the most valuable contribution a statistician can make to the public understanding of their data. The day before the rush, there’s more time for discussion. Prepping the ground by explaining in advance to journalists or trusted intermediaries which questions the data can and can’t answer (rather than talking about specifically what answers the data contain) is also enormously valuable. Communication should be right first time. Statisticians can help make that happen.

PS – Delivering data during the pandemic is no mean feat. Trying out improvements to the service or data collection, as many have done, is remarkable. Thank you to all those producing statistics for all the work you have done in what has been an awful year for so many.


How we won our Award for Statistical Excellence

As the deadline for applications for our Statistical Excellence award nears, the joint-winners of the 2020 award, the Scottish Fiscal Commission, have written a guest blog about how they won and what the award means to them. Apply for the 2021 Award for Statistical Excellence in Trustworthiness, Quality and Value by 30 April 2021.

What is the Scottish Fiscal Commission?

We are Scotland’s independent fiscal institution (IFI). The Commission was created to serve the needs of increased fiscal devolution in Scotland. We are a young organisation, established in 2017. Our official economic and fiscal forecasts are used by the Scottish Government to formulate the Budget, the Scottish Parliament to scrutinise the Budget, and stakeholders and the media to inform public debate. We are not considered an official statistics producer, but we made an active choice to apply the Code of Practice for Statistics wherever possible.

The Code and the award

Since our creation, we have followed the majority of the Code of Practice for Statistics principles, and in March 2018 we published a statement setting out our ongoing commitment to voluntarily apply the Code.

Last year, following encouragement from former team members, Mairi Spowage and Laura Pollitt, we decided to apply for the award. It was the first year the category had been created, so we had no template to go on. Although the Commission is not an official source of statistics, we’re committed to upholding the pillars of the Code – trustworthiness, quality and value – so it was just a question of evidencing how we do this.

We outlined our processes, which include routinely pre-announcing our publications; helping data providers to plan their work by publishing an annual Statement of Data Needs; accompanying our publications with the underlying data so others can use them; and communicating our work as widely as possible via our website, social media and events.

We also outlined our success in developing a reputation for delivering independent and credible forecasts, for which we were recognised by the OECD in an independent review published in 2019.

Silvia Palombi from the SFC holding the award

We were delighted to receive the award. We know that our stakeholders and ultimately the public depend on us to uphold the highest standards of statistical practice as a matter of course, so we were grateful to the Royal Statistical Society and the Office for Statistics Regulation for the opportunity to have our work endorsed in this way.

Any civil service organisation voluntarily applying the Code of Practice for Statistics to their work can apply for this year’s award. You can see the full list of organisations who have committed to apply the Code and more information about applying the Code on the Code of Practice website.


2021 Award

Apply for the 2021 Award for Statistical Excellence in Trustworthiness, Quality and Value by 30 April 2021.

Securing public confidence in algorithms – lessons from the 2020 exam awards

Last March, schools and colleges across the UK were closed and the qualification regulators (Ofqual, Qualification Wales, SQA and CCEA) were directed by the respective governments of England, Wales, Scotland and Northern Ireland to oversee the development of an approach to awarding grades in the absence of exams. While each regulator developed a different approach, all approaches involved statistical models or algorithms.

When grades were awarded to students last summer there was intense media coverage across the UK. We read headlines such as “Futures sacrificed for the sake of statistics” and statements implying that a “mutant algorithm” was to blame. The decisions that were made based on the calculated grades had a significant impact on many children’s lives. Up until last summer, most people had probably never felt personally affected by a government algorithm before.

Statistical models and algorithms, though, are increasingly becoming a part of normal life. As technology and the availability of data increase, developing these types of models in the public sector can play a significant role in improving services for society.

We are concerned that public confidence in the use of statistical models has been damaged by the exam processes last year. As Ed Humpherson, our Director General, said recently, “Public debate about algorithms veers between blind optimism in their potential and complete rejection. Both extremes have been very evident, at different times, in the discussion of the use of models to award exam grades.”

Our review

In the post-2020 exams world, the risk of a public backlash when public sector decisions are supported by an algorithm feels much more real – regardless of how ‘right’ the algorithm may be.

As the UK regulator of official statistics, it is our role to uphold public confidence in statistics. We believe that statistical models have an important role to play in supporting decision making and that they can also command public confidence. It is because of this that we reviewed the approaches taken last year to award grades, to get a better understanding of what happened and, most importantly, what others can learn from the experience. To us it was striking that, though the approaches and models in England, Wales, Scotland and Northern Ireland had similarities and differences, all four failed to command public confidence.

Our recently published report explores what we found. We acknowledge the unique situation that the regulators were working in which was far removed from their normal roles. We conclude that many of the decisions made supported public confidence. To support learning for others, we have also clearly highlighted in the report the areas where we feel different choices could have been made.

We found that achieving public confidence is about much more than just the technical design of the model. It is also not just about doing one or two key things really well. It stems from considering public confidence as part of an end-to-end process, from deciding to use a statistical model through to deploying it. It is influenced by a number of factors, including the confidence placed in models, the extent of public understanding of the models, the limitations of the models and the process they were replacing, and the approach to quality assuring the results at an individual level.

Lessons for those developing statistical models

We’ve identified 40 lessons for model developers which support public confidence. These fall under the following high-level principles:

  • Be open and trustworthy – ensure transparency about the aims of the model and the model itself (including limitations), be open to and act on feedback and ensure the use of the model is ethical and legal.
  • Be rigorous and ensure quality throughout – establish clear governance and accountability, involve the full range of subject matter and technical experts when developing the model and ensure the data and outputs of the model are fully quality assured.
  • Meet the need and provide public value – engage with commissioners of the model throughout, fully consider whether a model is the right approach, test acceptability of the model with all affected groups and be clear on the timing and grounds for appeal against decisions supported by the model.

Lessons for policy makers who commission statistical models

Model developers should not work in isolation. They should collaborate with others throughout the end-to-end development process. It is important that policy makers who commission models are part of this wider collaboration.

A statistical model might not always be the best approach to meet your need. Model commissioners need to be clear what the model is aiming to achieve and whether it meets the intended need, understanding the model’s strengths and limitations and being open to alternative approaches.

Statistical models used to support decisions are more than just automated processes. All models are built on a set of assumptions. Commissioners of models should ensure that they understand these and provide advice on the acceptability of the assumptions and other key decisions made in the model development.

The development of a statistical model should be regarded as more than just a technical exercise. Commissioners need to work with model developers throughout the end-to-end process and have regular reviews to check the model will meet the policy objective.

Lessons and recommendations for the centre of government

We also looked at the big picture: the wider infrastructure in place to support public bodies working with statistical models. We found that public bodies developing models need more guidance and support, and that this should be easier to access.

There are lots of different organisations in the statistical modelling, algorithm and AI space. As a result, it is not always clear what guidance is relevant to whom and where public bodies can go to for support. Some of this is down to inconsistencies in the terminology used to describe models, and some is due to it simply being hard to find out who is doing what in this very fast-moving area.

At the highest level, we feel that clearer leadership is needed from government. We are calling on the Analysis Function and Digital Function, working with the administrations in Scotland, Wales and Northern Ireland, to ensure that they provide consistent and joined up leadership on the use of models.

To support this, we recommend that those working in this area collaborate and develop a central directory of guidance. We see the Centre for Data Ethics and Innovation having a key role in this space. In addition, we are recommending that more guidance is developed, particularly for those wanting to test the public acceptability of their models.

And lastly, we recommend that any public body developing advanced statistical models with a high public value should consult the National Statistician for advice and guidance.

Our role going forward

This review supports the work of our Automation and Technology work programme. Through this programme we will be clarifying our regulatory role when statistical models and algorithms are used by public bodies. We are working on guidance about models and AI in the context of the Code of Practice for Statistics and advocating for the use of automation within statistical production. We will continue to work with other organisations to support the implementation of the review findings.

We are happy to discuss the findings of our review further. Please get in touch with us.

Listen to your enthusiasts: Implementing RAP at Public Health Scotland

This is a guest blog from Scott Heald following the launch of our new report: Reproducible Analytical Pipelines (RAP): Overcoming barriers to adoption.

Firstly, let me introduce myself. I’m Scott, the Head of Profession for Statistics at Public Health Scotland (PHS). PHS is a new body, formed in April 2020 to lead on tackling the challenges of Scotland’s public health, though our first year has been dominated by the COVID-19 pandemic. Our RAP journey started when we were in NHS Scotland’s Information Services Division (ISD), the health and care statistics body which now forms part of PHS.

PHS, and ISD before it, has been a big fan of RAP from the beginning. I wanted to share our story, from a bunch of enthusiastic statisticians who convinced me it was the right thing to do (I didn’t need much convincing!), to embedding it within our organisation in our reporting on COVID-19.

Our RAP journey began with a programme of work to transform how we published our statistics. It quickly became clear that the programme had to be as much about our processes for producing statistics as about the final published output. More automation was key – to speed up processes, eliminate manual errors, and release capacity to add value to our statistics. Greater job satisfaction for our statisticians was a welcome impact too.

“We don’t use that here”

Up till this point, our software of choice was proprietary. More and more graduates were joining our organisation and wondering why we weren’t using open-source software like R, having been taught it at university. I guess, in those early days, I was probably part of the “we don’t use that here” camp (partly out of fear, as I had not personally used any of the new software they were talking about).

However, I was persuaded (and willing) for a group of our statisticians to show us what could be done using R. Long story cut short: our group of enthusiasts showed the power of what could be done, and PHS is now making the strategic shift to follow RAP principles as a way of working.

The art of persuasion

I’d describe our RAP journey as “bottom up”, with support from the “top down”. When we started seeing the results, we didn’t need much convincing. OSR’s report showcases our work on Hospital Standardised Mortality Ratios – a quarterly publication which used to take five days to run (lots of detached processes, lots of room for error). I remember vividly the first time the team ran it the RAP way. Five minutes after the process started, the finished report was in my inbox. We couldn’t believe it! And, to be sure, spent the next five days running it the old way to make sure we got the same answers (we more or less did; the RAP way was more accurate, highlighting a few errors we fixed along the way!).
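A RAP in this sense is essentially one scripted, end-to-end run from source data to finished report. PHS work in R, but the shape translates to any language; the Python sketch below (with invented file names and an invented observed/predicted calculation) shows the idea of replacing detached manual steps with a single rerunnable pipeline:

```python
import csv
import statistics
from pathlib import Path

def extract(source: Path) -> list[dict]:
    """Read the raw data - in a real pipeline, a database or API pull."""
    with source.open(newline="") as f:
        return list(csv.DictReader(f))

def transform(rows: list[dict]) -> list[dict]:
    """Derive a standardised ratio of observed to predicted deaths."""
    return [
        {**row, "ratio": float(row["observed"]) / float(row["predicted"])}
        for row in rows
    ]

def report(rows: list[dict], out: Path) -> None:
    """Write the publication output - real RAPs render full documents."""
    mean_ratio = statistics.mean(row["ratio"] for row in rows)
    out.write_text(f"Mean standardised ratio: {mean_ratio:.2f}\n")

def run(source: Path, out: Path) -> None:
    """The whole quarterly run is one call, not days of detached steps."""
    report(transform(extract(source)), out)
```

Because every step is code, a rerun is a single command and the same inputs always produce the same outputs – which is also what makes the kind of old-versus-new comparison described above possible.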

Our learning is that it’s a relatively easy shift for more recent graduates because they already know R. The focus for our training had to be on members of staff who have been with us for longer and weren’t familiar with it. And that can take some persuading – team leaders can find themselves managing teams who are using software they have never used themselves. We had to support and train at all levels, with tasters for managers who may not need to delve into the finer details of R themselves, but who would know how to do it if they were doing their time again.

So, what have we learnt?

  1. Be prepared to try new ways of working
  2. Listen to your staff, who have a different perspective and fresh take on ways of working
  3. Start small – it’s easy to make the case when you can showcase the benefits of the RAP approach
  4. RAP improves the quality of what we do and eliminates errors
  5. Be prepared to invest in training – and recognise your staff will be in different places
  6. Use buddies – our central transformation team certainly helped with that, creating capacity in teams to RAP their processes
  7. Be open and share your code – we publish our code on GitHub, a great community to share ideas and approaches

Listen to your enthusiasts

OSR’s report highlights that we have a small central transformation team to support teams with their RAP work. This is crucially important as the initial work to RAP your processes can take time, so the additional capacity to support our teams to enable this to happen is a must. This initial investment is worth it for the longer-term gains. It’s not all about making efficiencies either. It’s about streamlining processes, reducing error, and giving our analysts job satisfaction. They are now able to add more value because they have more time to delve into the data and help users understand what the data are telling them.

Our focus on transforming the processes for our existing publications has stalled due to many of our staff being redirected to supporting PHS’s key role in Scotland’s response to the COVID-19 pandemic. However, a success of our RAP work is that many of our new processes required to produce our daily COVID-19 statistics are done using RAP principles – they had to be as we’re producing more statistics, more frequently than ever before. Our earlier use of RAP meant we were in a good place to apply the techniques to our new ways of working.

And my final bit of advice? Listen to your enthusiasts – I’m glad I did.

Transparency is fundamental to trust – the government must learn from mistakes during the pandemic

As part of our work on statistical leadership, we are hosting a series of guest blogs. This blog is from Will Moy, Chief Executive of Full Fact.


This time last year, the UK was about to head into what would be the first of three national lockdowns. Already, the news was dominated by the coronavirus pandemic, shining a spotlight on data that shows no sign of fading any time soon.

In the past year, in-depth coverage and debate around the merits of statistical releases have become commonplace, while the nation has watched numerous public briefings filled with data and statistics from politicians and scientists.

As an organisation dedicated to fighting bad information, Full Fact knows just how valuable good information, used responsibly and communicated well, is to public life and public debate. We support the OSR’s vision that statistics should serve the public good.

It has, therefore, been heartening to see the new prominence given to discussions about the importance of data, statistics and evidence during the pandemic, whether among members of the public, in newspaper columns or in public statements from the government.

At the same time, the government needs to work to maintain public confidence in both its decisions and the data it is using to make them, which is a key part of ensuring it can tackle the crisis: this shapes what people are willing to do and how long they are willing to do it for.

We have all been asked to trust that the government is acting in our best interests, as our political leaders point to data and scientific evidence as justification for decisions that have had real and significant impacts on the way we live our lives.

In doing so, the need for a clear commitment to transparency, accountability and honesty in public life could not have been stronger. However, this bar was not always met.

Ministers or officials too often referred to data or statistics in statements to the press, the public or Parliament without publishing the full data so the public and intermediaries could check for themselves. A failure to be transparent on the data and evidence informing the government risks undermining public confidence in fundamental parts of the government’s response.

As fact checkers, we rely on publicly available information to scrutinise such statements. We can and do question departments but time is of the essence when trying to challenge false or misleading claims. By and large, a fact check is at its most valuable when it is published soon after the claim has been made, reducing the time any inaccuracy has to spread, and coming when the media or public are still interested in that topic or debate.

For instance, we were unable to assess claims about the turnaround times of tests made by Prime Minister Boris Johnson on 3 June 2020 until the necessary data was published a month later, on 2 July, and we raised this issue with the OSR. Full Fact also found it impossible to assess a government target set out in July 2020 – to “test 150,000 at-risk people without symptoms per day by September” – because the right data still wasn’t available at the start of October.

We discuss these examples, and more, in our Full Fact Report 2021, which is based on our experience fact checking the pandemic. In it we make recommendations for action from the government, including that when government departments, ministers or officials refer to data or information when making statements to the public, the media or Parliament, the full data must be made publicly available.

Throughout the pandemic, Full Fact has been pleased to work alongside the OSR to help secure the publication of missing or unpublished data and to challenge the misuse of information – and both organisations are committed to continuing this work.

But timely publication of information should not require our interventions. The government must make a consistent commitment to publishing as soon as possible, and before being asked.

It is undeniable that there are still many changes – some cultural, some technical, and many long-term – that need to be made within the statistical system. This will require efforts from statisticians at all levels and across all departments, and the OSR’s report into Statistical Leadership proposes a comprehensive set of recommendations towards this.

Crucially, the report highlights the pivotal role of committed leaders inside and outside the Government Statistical Service in ensuring government and society get the most trustworthy, high quality, and high value statistics. The report is in itself important and welcome leadership from the OSR: the recommendations should be promoted and acted on with progress monitored and evaluated.

Full Fact strongly agrees that efforts from statistical leaders, and the wider system, must be equalled by political leaders. It is of critical importance that they lead by example, whether by ensuring that statistical thinking is seen as essential within the civil service, or by respecting the principles set out in the Code of Practice for Statistics.

Demonstrating respect for these responsibilities is essential if the public is to have confidence in the production and use of statistics in public life. The government must learn lessons from the pandemic and prove it is worthy of the public’s trust.


Talking numbers and making them count

As a communications professional, I use insight and evidence to deliver impactful communications. Statistics are a great way to present evidence and explain complicated information, but comms people are not always well known for their statistical literacy, and this can sometimes cause problems.

During a busy day in the comms team, any use of numbers in a press release, tweet or presentation should align with the Code, ensuring messages are clear, measured and tell the story appropriately. It is essential that the government’s production and use of statistics command confidence, both in the statistics themselves and in the organisations using them, and help those listening understand the key messages.

At the Office for Statistics Regulation we are interested in how numbers can be used powerfully and collectively across government to convey important messages and information. Statistical leadership by government is essential to ensure the right data and analysis exist; to ensure they are used at the right time to inform decisions; and to ensure they are communicated clearly and transparently, in a way that supports confidence in the data and in the decisions made on the basis of them.

Statistical leadership is not just about good leadership of the statistics profession. While that is important, we want individuals both inside and outside the statistics profession to show leadership: from the most junior analysts producing statistics right through to the most senior Ministers quoting them in Parliament and the media. It is relevant to all professions, including policy and communications specialists.

Communications teams should work in close partnership with their department’s analysts to ensure that any use of statistics does not distract from their key communications messages, or itself become the story. The winning situation is using statistics helpfully: conveying the right impact, telling the story, building understanding and enhancing the organisation’s reputation in the process.

The Code of Practice for Statistics, with its principles of trustworthiness, quality and value, provides an excellent guide to doing this as effectively as possible, so that users can confidently make decisions about the statistics presented to them, using them without question to access what they need.

Statistics can really add to public debate, as we have seen during COVID-19, when the nation has used numbers to understand the pandemic and its impacts on society, the economy and beyond. But it is essential that anyone using numbers and speaking on behalf of government can communicate statistics effectively, in a way that commands confidence and helps those listening understand the key messages. The simplest way to achieve this and empower your message is to ask the right questions about statistics before you use them. And if you still feel unsure, find another way to evidence your point.

Comms people don’t need to know the Code inside out, but they should always work closely with Heads of Profession for Statistics for advice, support on using numbers and an understanding of its guiding principles.

If you are interested in finding out more about using statistics, the Code or Statistics Leadership please get in touch with me or visit our website.

Here are some tips…

  • Does it look right? Is the number plausible? If it’s unusual, it could be wrong: what’s behind the surprise?
  • What exactly are we measuring and why? Is the source reputable, and did they show their working?
  • Where do the data come from? What’s the backstory? Always be curious about someone else’s data: what do we discover if we go one click further?
  • Only compare the comparable. Watch out for changes in definitions and different ways of measuring the same thing. What’s the historical trend?
  • Presentation is key. Good use of pictures and graphics helps convey meaning and should never cause confusion or misrepresentation.
  • Remember to ask your Head of Profession for statistics, or a statistician who has worked to produce the data, for advice on how best to present numbers in communications.

I didn’t think I’d ever be interested in population statistics, but then I came to my Census

Ok, so that’s not quite true. I’ve been working with a focus on population and society statistics for the past three years here at the Office for Statistics Regulation (OSR) and I love it. Working on areas such as migration, culture, wellbeing… it is fascinating. These statistics play such an important role in helping people, government and decision makers understand and plan for our society in this country.

But even with my work hat off for a second, this is such an exciting time for statistics as we near Census day in England, Wales and Northern Ireland and work continues to prepare for Scotland’s Census in 2022. The Census is so important and a unique source of data. It will be used by many different people for many different purposes. For me, Census data is at its most valuable when it is used to help people understand their local communities, whether by local councils, community groups or even school students. I am so eager to play my part and fill in my Census return – online this time, of course.

Work hat back on: I have been leading OSR’s assessment of the Censuses in the UK. This assessment is OSR’s role when it comes to Census data and statistics. We aren’t collecting the returns or producing the data, but we are working closely with the Census offices to ensure that what they are doing – ultimately delivering Census data and statistics to the public – is in line with the Code of Practice for Statistics.

We collect evidence from Census offices, speak with users of Census data and make our judgements on compliance with the Code, reporting on this through our formal assessment reports and public correspondence with Census offices. That is the nuts and bolts of the assessment process. In reality, OSR’s involvement means we are continuously engaging with Census offices as they develop and deliver their Census plans, to support the best possible outcomes. We meet with Census offices regularly to discuss their work, share our views on their developments, and talk through how they have taken forward our findings. It has kept me busy since 2018 and will continue to do so until well after the data and statistics are published.

This ongoing conversation is overlaid with a more formal reporting structure. We have completed phase 1 of the assessment and are kicking off phase 2 for England and Wales and Northern Ireland. For each phase, the Office for National Statistics (ONS) and the Northern Ireland Statistics and Research Agency (NISRA) publish an accreditation report. These reports update Census users on how they consider the practices, processes and procedures for Census 2021 meet the standards of the Code. They give OSR evidence for our assessment and, more importantly, provide useful information on Census progress for all who are interested. You should definitely take a read!

We are always keen to hear from users of Census data, and I have had some extremely valuable conversations with individuals and organisations to date – a big thank you if you have submitted feedback or if we have spoken in the past. As part of this second phase, we have opened up our user consultation once more. Your views and opinions are so important in helping us understand what Census offices are doing well and how things could be improved. So please, do find out more and get in touch.

I hope you all are as excited as I am as we get closer to 21 March!

Next stop: National Statistics status

In 2020 we assessed estimates of station usage produced by the Office of Rail and Road (ORR) and designated them as National Statistics in December. In this blog, Lyndsey Melbourne, Head of Profession for Statistics at ORR, and Anna Price, the lead regulator for the assessment, talk about their experience and why assessments of official statistics are so valuable.

Where it all began…

Lyndsey: Most of our statistics were designated as National Statistics in 2012. In 2019 OSR carried out a compliance check and confirmed they continued to uphold the high standards expected. At the time we also discussed future assessments – in particular, our most popular set of statistics, estimates of station usage, had never been assessed. These statistics provide unique information about each of the 2,500+ mainline rail stations in Great Britain. The granularity of the data is one of the main reasons that these statistics are of interest to a very broad range of users: they are relevant to anyone, no matter where they live. We were keen to further promote the quality and value of the statistics by gaining National Statistics status.

Anna: This assessment was a bit different to others. Usually we do our review, publish our findings and requirements, and then give producers a few months to meet these requirements. But when we first met with Lyndsey and Jay, the lead statistician for estimates of station usage, in April 2020 they told us they were keen to get National Statistics status in time for the next statistical release in December.

The assessment process

Anna: To support ORR to achieve this ambition, we adapted our usual process, for example sharing our findings and requirements as we developed them. This let the statisticians at ORR start on improvements while we worked on more complex findings and wrote our report, instead of waiting until the end. We had lots of meetings with the team during the project and were really impressed with the ideas they came up with each time we raised an area for improvement. I think the flexibility and enthusiasm of both teams was the reason that the project was so successful.

Lyndsey: Throughout the assessment, OSR were flexible and happy to work with us to agree timescales to fit in with our publication plans and around our day jobs. We were keen to work towards achieving National Statistics designation of the statistics in time for our next annual publication planned for December 2020. Otherwise, it would be up to 20 months before we could publish designated statistics!

OSR were very accommodating to this request and we worked closely during the following eight months to review and improve our statistics. OSR’s flexible approach allowed emerging requirements from their assessment to be addressed during the production process of the next set of statistics.

It’s fair to say that producing the annual publication at the same time as addressing OSR requirements was a challenge, but being able to confirm to users that our statistics had been successfully designated as National Statistics on publication day was very satisfying.

The value of this assessment

Lyndsey: During the assessment OSR spoke to a range of users and the feedback they obtained was extremely valuable. We have continued to speak to these users to understand their use of our statistics and how they could be improved further.

The improvement plan we developed to address OSR requirements and other feedback from users was a really useful tool for us. Sharing ideas and drafts with OSR along the way and getting their feedback was another valuable part of the process. We published this improvement plan on the user engagement page of our data portal to keep users up to date on the changes we were making.

Anna: The users of these statistics are passionate about them. So it was a lot of fun to hear about how they use the statistics, what they like and what would make using them even better. Seeing the variety of people who use the statistics, for a variety of purposes, was really motivating – it made it even more satisfying when we saw changes to the statistics in December which met the user needs we had identified.

At OSR we like to champion good practice, as well as areas for improvement. So it was nice to highlight the great work that ORR were already doing on these statistics – like the Twitter Q&A that ORR host on publication day, this year accompanied by a launch video and a live YouTube Q&A. It’s great to see statisticians putting themselves out there to talk about their statistics directly with users.



To keep up to date with our latest work, you can follow us on Twitter and sign up to our monthly newsletter.