Insights into the use of official statistics in policy evaluation

Grace Pitkethly, Insights and Evaluation Manager at OSR, writes about how the use of official statistics can strengthen evaluation, already an essential tool for the effective functioning of government.

Evaluation is an essential tool to ensure the effective functioning of government. In the words of the Evaluation Task Force: “Government departments are expected to undertake comprehensive, robust and proportionate evaluation of their policy interventions in order to understand how government policies are working and to ensure the best value of public money”.  

In recent years there has been good progress in setting up structures and providing guidance from the top down to help departments conduct good-quality policy evaluations. We fully support this at OSR – our Director General, Ed, has written about how good evaluation supports (and is supported by) the Code of Practice for Statistics.

At OSR, we also want to help from the bottom up – enabling the people conducting evaluations to do so as effectively as possible, using statistics and data that serve the public good. Supporting policy evaluation at different levels, whether cross-government, departmental or team-level, helps enable efficient and good-quality evaluations. One way we can do this is by supporting the use of official statistics in policy evaluations, focusing on the value of the statistics in meeting society’s need for information from evaluations. NAO guidance based on the Magenta Book says that existing data from administrative and monitoring systems, or large-scale, long-term surveys, should be considered first as data sources. But is that actually the case?

We carried out a quick exploration of how official statistics and their underlying datasets are currently used in evaluations, and how OSR can support statistics producers to make their statistics more valuable for evaluations. Despite limiting our scope due to time and resource constraints, we heard through our conversations about a variety of monitoring and evaluation programmes which draw on official statistics.

Crucially, we did not identify any evaluations which rely significantly on published official statistics alone. This doesn’t mean that examples don’t exist – but none were raised in our conversations with five OSR regulators, individuals in eight policy departments involved in carrying out or enabling evaluation, and five other teams across ONS and the Cabinet Office.

We found that the most common way official statistics are used in evaluation is through the analysis and linkage of data which underpin official statistics. In some cases, the data which underpin official statistics are linked to, or analysed alongside, primary data collections designed specifically for the evaluation. This is to overcome barriers such as data gaps (where official statistics are not produced for all outcomes of interest) and granularity (where official statistics do not break down into the geography or group of interest). One example is DLUHC’s Supporting Families programme evaluation. This linked together existing data sources from multiple departments (many of which feed into official statistics) and Local Authority data, with additional primary data collection. 

However, our conversations highlighted other potential barriers to linking data in this way, such as an inability to access data held by other departments securely, difficulty cultivating relationships within and across departments to get buy-in, and data matching issues arising from a lack of harmonisation. These barriers are faced not only at departmental level but also by the individuals conducting or involved in evaluations. This shows the importance of combining top-down, cross-government evaluation guidance with a bottom-up approach, starting directly with the people producing and using the data, to create the right conditions for successful evaluations.

Although these are high-level findings, they highlight key questions that OSR can explore to support the use of official statistics in evaluation:  

  • Do statisticians consider key policy questions and the data needs of evaluation when developing statistical outputs? And do OSR regulators support them to do this? 
  • Are the data suitable for linking to other datasets? 
  • Is there effective analytical leadership in place to support finding, accessing and sharing official statistics for evaluation purposes? 

These are just some of the areas arising from this work that we will explore. What’s certain is that evaluation is only growing in importance and visibility across government, and OSR can play a role in its success.