3. What do we mean by comparable statistics and data?
Comparable vs coherent statistics and data
The terms comparability and coherence are often used interchangeably, but they are distinct, though closely related, concepts.
Coherence has several potential definitions which can be seen as building on each other:
- understanding and explaining ‘internal consistency’ within a single source at different levels of aggregation (European Statistical System)
- the degree of similarity between related statistics and the fuller insight achieved by drawing them together (Code of Practice for Statistics, Edition 2.1)
- where the extent of similarities or differences between statistics and data that measure the same concept, but are produced by different producers using different sources and methods, is fully documented and explained (Fraser of Allander Institute).
On comparability, the Fraser of Allander Institute noted that comparability can be seen as a spectrum when comparing data across the four UK nations, and that it might also relate to the degree to which data can be compared across different geographies over time. Within this comparability spectrum (Annex A), this work also identifies the concept of ‘meaningful comparability’, which is less stringent than ‘full’ or ‘direct’ comparability.
‘Meaningful comparability’ means the minimum level of comparability that is needed for drawing firm conclusions and for shaping action.
‘Meaningfully comparable’ statistics and data may be collected and produced separately across different geographies and have definitional or methodological differences. Crucially, determining whether different statistics and data on common topics are ‘meaningfully comparable’ often requires a case-by-case assessment of differences between statistical sources on common topics, and a sound understanding of the user needs and comparisons that are to be made.
Using this framework, other sets of related statistics may be ‘conceptually comparable, but not yet coherent’ or ‘conceptually comparable and coherent’. But neither of these would be meaningfully comparable. For example, local government finance statistics are produced separately by producers in England, Wales and Scotland and are conceptually comparable. However, statisticians have carried out limited work to determine the extent to which each separate set of statistics is meaningfully comparable with the others.
Similarly, fuel poverty measures are produced separately by each UK nation and are based on various sources and methods. As such, indicators measuring similar concepts are available, and an analysis of their coherence has been completed and explained to users. However, they are not meaningfully comparable, and it is not possible to produce a UK figure.
The Code of Practice for Statistics, which sets the standards that producers of official statistics should commit to, refers to both coherence and comparability:
- Q1.4 “Source data should be coherent across different levels of aggregation, consistent over time, and comparable between geographical areas, whenever possible.”
- Q3.3 “The quality of the statistics and data, including their accuracy and reliability, coherence and comparability, and timeliness and punctuality, should be monitored and reported regularly…”
- V3.3 “Comparisons that support the appropriate interpretation of the statistics, including within the UK and internationally, should be provided where useful. Users should be signposted to other related statistics and data sources and the extent of consistency and comparability with these sources should be explained to users.”
Given the potential overlap in concepts and definitions related to statistical coherence and comparability, initiatives aimed at delivering improvements risk being ambiguous to statistics producers and users. There is also a danger that multiple definitions lead to confusion and to work that is not aligned.
OSR has noticed that the Government Statistical Service (GSS)’s work focuses primarily on ‘coherence’ – understanding and explaining the differences between existing statistics. This coherence work has taken prominence over producing new, meaningfully comparable UK-wide statistics across a range of priority measures. Such statistics can be difficult to achieve, particularly where different UK data sources on common topics are well established and based on different definitions which reflect the distinct policy priorities of devolved governments. However, it is important to recognise that collaborative GSS coherence work has led to the creation of new UK-wide data on certain priority areas and to the development of a UK-wide comparable data workplan.
The GSS coherence work is important and leads to new insights on topics. It provides essential guidance to support users in making appropriate comparisons and in avoiding inappropriate comparisons. However, an improved understanding of the extent of coherence alone does not necessarily improve the availability of meaningfully comparable UK-wide statistics and data.
Statistics on some topics are not comparable as an indirect result of the devolution of policy decisions. There is value in statistics and data that meet local needs. However, to truly serve the wider public good, UK citizens might reasonably expect UK statistics and data on a range of priority topics to be comparable enough to understand outcomes, draw firm conclusions and hold their respective governments to account. A further key benefit of comparable statistics and data is the ability to identify where an approach taken in one area is working well, so that it can be replicated elsewhere. We heard user views in these areas at the UK Statistics Assembly this year.