3. Sound Methods

The Code states that producers of statistics and data should use the best available methods and recognised standards and be open about their decisions. Adherence to these practices ensures that methods are sound.

3.1 Use of appropriate methods and processes

Indicator 3.1: Methods and processes are appropriate and based on national and international good practice, scientific principles or established professional consensus.

Statistics should be produced using appropriate methods and processes. There may be more than one available method that could be used, and different methods might be suitable for meeting different user needs.

Using methods that are based on national or international good practice, scientific principles or established professional consensus helps ensure that the methods are robust. It also aids comparability and consistency between sets of statistics that estimate the same concept for different countries, or that estimate related concepts. As the UK no longer has a legal requirement to meet European standards, there may be more than one set of international guidance that the UK could follow. Equally, in some cases there may be more than one method based on established good practice, or there may be no professional consensus around the best method. In these cases, producers should assess candidate methods against quality criteria that reflect the needs of UK users. There should be a clear rationale for the choice of methods and processes.

This indicator is derived from the Code practice Q2.1. Both the IMF DQAF and ESS QAF include similar indicators around methodological soundness and the use of international good practice.

Example questions:

  • What methods and processes are used in the production of these statistics?
  • Are the methods and processes used appropriate for the intended uses of the statistics?
  • What national or international good practice, scientific principles or established professional consensus are the methods based on?
  • How were the methods and processes chosen?
  • What quality criteria were considered in choosing methods and processes?
  • What feedback have users given about the choices of methods and processes?
  • How does the producer ensure that outputs are produced in line with the stated methods?

3.2 Use of recognised standards, classifications and definitions

Indicator 3.2: Statistics, data and metadata are compiled using national and international recognised standards, classifications and definitions which are harmonised to be consistent and coherent with related statistics and data where possible.

As well as following good practice in methods, the use of recognised standards, classifications and definitions also improves the quality of statistics. This is particularly the case for economic statistics, where the framework of National Accounts requires estimates of multiple concepts for the same industries or products and of the same concepts for different sectors. The use of recognised standards, classifications and definitions makes this possible. Following international standards also enables estimates to be compared between countries.

This indicator is derived from the first part of the Code practice Q2.2. The practice has been split to distinguish between following recognised standards, classifications and definitions and the need to explain any deviations to users. International standards are emphasised for this framework due to their important role in economic statistics. Similar indicators are included in both the IMF DQAF and ESS QAF.

Example questions:

  • What standards, classifications and definitions are used to compile the data?
  • Are these recognised standards, classifications or definitions?
  • What feedback have users provided about the standards, classifications and definitions?
  • Are the standards, classifications and definitions consistent and coherent with related statistics and data?
  • How does the producer ensure that statistics are produced in line with the stated standards, classifications or definitions?

3.3 Explanation of reasons for deviations from standards

Indicator 3.3: Reasons for deviations from standards, classifications and definitions are clearly explained, including any implications for use of the statistics and data.

Whilst the UK was part of the ESS, it had a legal obligation to follow the international standards, classifications and definitions agreed in the legislation. Since leaving the EU, the UK has greater scope to deviate from these standards and innovate where this enables it to meet domestic statistical needs, while recognising that, for economic statistics, there is often strong demand for compliance with international standards. Maintaining trust in the resulting statistics will require clear explanations of any deviations and the reasons for them. The implications for the use of the statistics and data, including for international comparisons, will need to be made clear to ensure that statistics are used appropriately.

This indicator is derived from the second part of the Code practice Q2.2. Whilst the ESS QAF requires the consistent application of standards due to its context in the production of European statistics, the IMF DQAF requires the overall structure to follow internationally accepted standards and for classifications to be ‘broadly’ consistent. Although this indicator does not require the UK to fully follow the international standards, classifications and definitions, it is not out of line with the international frameworks, as the UK context now enables deviation where that better meets users’ needs.

Example questions:

  • Are there any deviations from standards, classifications and definitions?
  • What are the reasons for any deviations from standards?
  • Where are these reasons explained and how clear is the explanation?
  • Are the implications for the use of the statistics explained?

3.4 Transparency of methods

Indicator 3.4: Producers are transparent about the methods, standards, classifications and definitions used, giving the reasons for their selection. The level of detail is proportionate to the complexity of the methods chosen and reflects the needs of different types of users and uses. Published methods information is reviewed and updated whenever needed.

By clearly explaining the methods, standards, classifications and definitions used in the production of statistics, including the reasons for their selection, producers enable their users to understand whether the statistics are fit for their purpose. Such explanations provide important information about the relevance, accuracy, coherence and comparability of the statistics. Different users will require different levels of detail and technical information, and these different audiences should be catered for. Producers could also consider different ways of explaining the methods used to meet the needs of different audiences. The level of detail should be proportionate to the complexity of the methods: more complex methods may require a greater level of detail. Published information should be reviewed and updated whenever there are changes to the methods, standards, classifications or definitions. Being transparent about this information can also help producer teams when there are changes in personnel.

This indicator is derived from the Code practice Q2.3 and has been expanded to specifically highlight standards, classifications and definitions. The need to keep methods information up to date has also been added based on experience from the pilot assessments. A similar indicator, indicator 6.4, is included in the ESS QAF: ‘Information on data sources, methods and procedures used is publicly available’.

Example questions:

  • Where are methods, standards, classifications and definitions described? Are relevant levels of detail readily accessible for different audiences, so that the information they need to inform their use is clear?
  • Are the explanations transparent?
  • Is the level of detail proportionate to the complexity of the methods?
  • Do the explanations reflect the different types of users and uses?
  • When were the explanations last reviewed and updated?
  • In the course of the assessment, did we find that the latest published guidance or background information was out of date?

3.5 Explanation of limitations of methods

Indicator 3.5: Limitations of the methods and their application are identified and explained to users, including the effect on the statistics and their use.

Understanding the limitations of the methods being used to produce statistics is important for understanding their quality. The selection of methods will often involve balancing competing requirements such as accuracy, timeliness and ease of application and explanation. Identifying how these competing priorities have been balanced, and the constraints on the chosen methods, will help identify any limitations of the methods. These competing priorities and constraints should be clearly explained to users, including the effect on the statistics, to aid understanding about the fitness for purpose of the statistics for their use.

This indicator is derived from the Code practice Q2.4. Reference to bias and uncertainty has been removed as an indicator covering these concepts has been added in the Assured Quality principle of this framework. Explicit reference to limitations of methods is not included in the international frameworks.

Example questions:

  • Have any limitations of the methods and their application been identified by the producers, users or the OSR team?
  • Have limitations of the methods been explained to users, including the effect on the statistics and their use?

3.6 Advance notice of changes to methods

Indicator 3.6: Producers of statistics and data provide users with advance notice about changes to methods, explaining why the changes are being made. Users are made aware of the nature, extent and effect of the changes.

When changes are made to methods, users should be provided with advance notice so that they can understand the likely impact on their use of the statistics. It is important that users understand why the changes are being made and the nature, extent and impact of them. This will help users understand whether the statistics will still meet their needs. It also helps ensure that statistics are not misunderstood when they are released, for example if a change in method affects the trend in the statistics.

This indicator is derived from the Code practice Q2.5, with the added provision that the change’s effect should be clearly specified alongside its nature and extent. The requirement for a consistent back series, where possible, has been removed as a separate indicator covering this has been included. Both the IMF DQAF and the ESS QAF include indicators for providing advance notice of changes to methods.

Example questions:

  • Have any changes been made to the methods in recent years?
  • Were users provided with advance notice of changes?
  • Were reasons for the changes given along with the nature, extent and effect of the changes?

3.7 User feedback on changes to methods

Indicator 3.7: Producers seek and implement, where appropriate, feedback from users about changes to methods.

When making changes to methods, producers should seek feedback from users about the changes and the likely impact on their use of the statistics. Users of statistics are best placed to understand the effect of a change on their use, and their views should be sought and, where appropriate, implemented to ensure that statistics remain fit for purpose.

This indicator has been added to make the need to engage with users over changes to methods explicit and is in line with the Code practice V4.3, which states that users should be involved in the ongoing development of statistics and data, exploring and testing statistical innovations, so that the statistics remain relevant and useful. The ESS QAF includes indicator 15.6, ‘users are kept informed about the methodology of statistical processes including the use and integration of administrative and other data’.

Example questions:

  • Did the producer seek and implement, where appropriate, feedback about any methods changes?
  • Did the producer provide information back to users on the changes made as a result of user feedback?

3.8 Consistent time series

Indicator 3.8: Where a change in methods leads to a break in the time series, a consistent time series is produced, with back series provided where possible.

Innovation and changes in technology or the availability of data sources may lead to a change in methods. Where a change leads to a break in the time series, a consistent time series is often needed by users and should be provided where possible. This reduces the risk of changes in the series being attributed to ‘real-world’ change when they are in fact due to a change in methods. Many users of statistics are interested in making comparisons over time, and the ability to do this is an important determinant of whether the statistics are fit for their purpose.

This indicator is derived from the middle part of the Code practice Q2.5. It has been amended with wording from ESS indicator 14.2.3 around breaks in the time series to make the issue explicit.

Example questions:

  • Has a change in methods led to a break in the time series?
  • If so, was a consistent time series provided, including a back series?
  • How was the back series constructed to ensure that it represented the ‘real-world’ change over time?
  • What were the constraints in developing the back series and what impact did these have?
  • If a consistent time series or back series was not provided, what were the barriers to providing it?

3.9 Collaboration to improve methods

Indicator 3.9: Statistics producers collaborate with topic and methods experts, the scientific and international community and producers of related statistics and data to improve methods wherever possible.

There is a wealth of knowledge and experience about the production and use of statistics that can help with developing and improving methods. By collaborating with topic and methods experts, the scientific community, the international community and producers of related statistics, producers can tap into this knowledge to improve their own methods wherever possible. Collaboration has the potential to improve the quality of statistics, as well as their coherence and comparability, which bring quality gains of their own. Each producer team should take responsibility for collaborating to improve methods. In larger organisations, such as ONS, central functions may also collaborate with the scientific and international community to establish general principles or develop new cutting-edge methods.

This indicator is derived from the Code practice Q2.6 with reference to the scientific community added from the ESS QAF indicator 7.7. Wording was also added to reinforce that collaboration can improve methods. This indicator is closely aligned with ESS QAF indicator 7.7, which states ‘Statistical authorities maintain and develop cooperation with the scientific community to improve methodology, the effectiveness of the methods implemented and to promote better tools when feasible.’

Example questions:

  • When has the producer collaborated with topic and methods experts and the scientific community over improvements to methods?
  • When has the producer collaborated with the international community over improvements to methods?
  • When has the producer collaborated with producers of related statistics over improvements to methods?

3.10 Up-to-date knowledge of developments

Indicator 3.10: Producers keep up to date with developments that might improve methods and quality. They assess the added value of potential improvements and evaluate the likely impact on the statistics, including in relation to comparability and coherence.

As a result of improvements to technology, increases in data collection and other developments, there may be opportunities to improve methods and quality. Producers should keep up to date with these developments so that they can take advantage of new data or methods that may improve quality when they become available. The added value of potential improvements and their effects on the statistics should be evaluated to ensure that statistics remain fit for purpose and meet users’ needs.

This indicator is derived from the Code practice V4.5. The ESS QAF includes a similar indicator, 7.7: ‘Statistical authorities maintain and develop cooperation with the scientific community to improve methodology, the effectiveness of the methods implemented and to promote better tools when feasible’. There is also an emphasis on innovation and improvement within the international frameworks. Keeping up to date with developments that could improve quality is one way of doing this.

Example questions:

  • How does the producer keep up to date with developments that might improve methods and quality?
  • How has the producer team assessed the value added of potential improvements and likely effects on the statistics, including in relation to comparability and coherence?
  • How has the producer team balanced competing priorities for improvements?

3.11 Independent internal and external review

Indicator 3.11: Producers seek independent internal and external review of their statistical methods and are open to identified areas for improvement.

Independent review of methods and processes can provide useful feedback on the quality of statistics and identify ways in which quality can be improved. Independent review can take different forms, and we are not explicit about which forms should be used. It can include peer review, methods show-and-tell sessions, use of expert panels or formal quality reviews. In the context of UK economic statistics, openness to identified areas for improvement may include whether recommendations from the 2016 Independent review of UK economic statistics and the 2014 National Statistics Quality Review of National Accounts and Balance of Payments statistics have been implemented, or whether the methods have changed following review by the Economic Statistics Centre of Excellence or an appropriate external technical panel. For some areas, there may be other specific reviews of relevance, such as the 2015 UK Consumer Prices review. These all provide opportunities to improve quality, and producers should be open to making improvements in areas identified as needing them.

This indicator is based on the Code practice T4.6 but is less specific about the types of independent review that could be undertaken. The indicator has also been made more specific about the reviews being carried out on methods and statistical processes so as to focus on the quality of the statistics. The ESS QAF includes a similar indicator, 4.4: ‘There is a regular and thorough review of the key statistical outputs using also external experts where appropriate.’

Example questions:

  • Have the statistics been subject to a formal independent review?
  • If so, what did the review find?
  • What other forms of independent review are undertaken (for example, peer review or expert panels)?
  • What have these found?
  • Was the producer open to identified areas for improvement?
  • What were the barriers to implementing findings of independent reviews, peer reviews or the suggestions of expert panels?
