Audit and inspection

There are certain circumstances in which regular, systematic external evaluation, audit or inspection of the underlying data is essential to increase both the quality of, and public confidence in, statistics produced from administrative data.

For statistics requiring higher levels of assurance, these external evaluations should be regular and repeated.

In the absence of regular, repeated external scrutiny (for those sets of statistics for which it is appropriate), statisticians should highlight the deficiency for users and investigate whether there are other sources that could be used.

Administrative data underpinning official statistics can be subject to, or feature in, various kinds of audit, depending on their operational context, for example:

  • financial audit – a forensic check of invoices and payments
  • clinical audit – an in-depth review of case records
  • statistical audit – sampling and follow-up of existing cases

These investigations can be used to provide context and, in some cases, corroboration of data quality. For example, regulators may carry out their own checks of the data supplier's activities, which can provide helpful insight into its effectiveness in managing information.

It is good practice to investigate whether these other types of audits could provide information to support the statisticians’ quality judgments, and to use them if appropriate. It is also recognised that in some cases these types of corroborating information are not available.

 

Official Statistics examples of audit and inspection:

 

Audit Example 1

DWP's Work Programme is the government scheme in Great Britain to assist people who are long-term unemployed into sustained employment. The service is provided by employment support organisations through 40 contracts with 15 prime providers. These providers work with a larger number of sub-contractor organisations. Claimants are randomly assigned by their local Jobcentre Plus to a prime provider in their area.

Providers receive a Job Outcome payment when claimants complete defined periods in work (usually 6 months, but 3 months for those who are harder to place, e.g. ex-prisoners). Providers can then receive further payments, known as sustainment payments, for each additional continuous 4 weeks in employment.

Automated check

All Job Outcome payment claims are subject to an off-benefit check before payment. This involves an automated check that matches participant information against DWP's Customer Information System to ensure that participants for whom Job Outcome payments are claimed are not claiming benefit.

The automated off-benefit check has a window of 15 days in which the check is applied, to allow for minor discrepancies between the details of the provider’s claim and the details on Departmental systems. Job Outcome payment claims that fail this automated check are removed from the system (unless they can be validated) and not paid. Claims which pass the off-benefit check are released for payment, and are then subject to further post payment in-work checks.
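As an illustration only, the logic of such a windowed check can be sketched as follows. The function and field names are hypothetical and do not describe DWP's actual matching rules.

    # Illustrative sketch only, not DWP's implementation. A claim passes the
    # off-benefit check if the participant's recorded off-benefit date falls
    # within a 15-day tolerance window of the date on the provider's claim.
    from datetime import date, timedelta
    from typing import Optional

    TOLERANCE = timedelta(days=15)  # the window described above

    def passes_off_benefit_check(claim_date: date, off_benefit_date: Optional[date]) -> bool:
        # If the Customer Information System still shows the participant as
        # claiming benefit (no off-benefit date recorded), the claim fails and
        # is removed unless it can be validated.
        if off_benefit_date is None:
            return False
        # Allow minor discrepancies between the provider's claim and
        # Departmental records.
        return abs(off_benefit_date - claim_date) <= TOLERANCE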

Post-payment check on a random sample

Post-payment validation is performed every month to strengthen the controls against fraud and error in the Job Outcome payments reported to DWP by the Work Programme providers. This process involves selecting a sample of 17 claims per contract from the total population of Job Outcome payments that passed the automated off-benefit check and that were subsequently paid in that month. The results of 6 rounds of validation, one for each month, are brought together every six months (April-September, October-March) to provide validation rates.
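To illustrate the sampling step (the data structures and seed below are hypothetical; the methodology described here only states that 17 claims per contract are selected each month), a simple random draw might look like this:

    # Illustrative sketch only: draw a simple random sample of up to 17 paid
    # Job Outcome claims per contract for one month's validation round.
    import random

    SAMPLE_SIZE = 17

    def monthly_validation_sample(paid_claims_by_contract, seed=0):
        """paid_claims_by_contract maps a contract identifier to the list of
        claim references paid in the month after passing the automated
        off-benefit check."""
        rng = random.Random(seed)
        return {
            contract: rng.sample(claims, min(SAMPLE_SIZE, len(claims)))
            for contract, claims in paid_claims_by_contract.items()
        }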

Corroboration against another data source

The sample is matched against HMRC P45 data to validate employment. Claims that fail the HMRC check are validated by confirming employment with either the employer or the individual. Job Outcome payments found to be invalid are used to calculate the error rate, which is then extrapolated to the total population. The results of six rounds of validation (one for each month) are brought together every six months to provide the quarterly error rates used in the National Statistics.

The primary purpose of the error rate is to extrapolate financial recoveries across all payments made to a contract in the extrapolation period, rather than to recover only the invalid sampled claims. Once the percentage of error has been calculated from the sample, the error rate is applied to the total paid to providers for the relevant six-month period, and the provider is then required to pay this amount back to the Department.
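A worked sketch of this extrapolation, using invented figures rather than real DWP data, is shown below:

    # Illustrative figures only. The error rate observed in the pooled sample
    # is applied to the total paid to the provider over the six-month period.
    def extrapolated_recovery(invalid_in_sample, sample_size, total_paid_in_period):
        error_rate = invalid_in_sample / sample_size
        return error_rate, error_rate * total_paid_in_period

    # e.g. 4 invalid outcomes in a pooled sample of 102 (6 months x 17 claims),
    # against £500,000 paid on the contract in the period:
    rate, recovery = extrapolated_recovery(4, 102, 500_000)
    # rate is roughly 0.039, so roughly £19,600 would be recovered from the provider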

Data processing – adjustment for error

The error rates for the 40 contracts are used to derive adjustment factors, which are then used to adjust the official statistics so that they reflect the final Job Outcome payments made to providers. The adjustment factor is derived by dividing the number of Job Outcomes which fail the post-payment validation process by the total number of Job Outcomes sampled. This ratio is applied to Job Outcomes (less the sample and those already validated) to adjust the official statistics.
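The arithmetic of this adjustment can be sketched as follows; this is a reading of the stated ratio rather than the Department's exact formula, and the variable names are hypothetical.

    # Illustrative sketch of the adjustment factor described above.
    def adjusted_job_outcomes(reported_outcomes, sampled, already_validated, failed_in_sample):
        # Adjustment factor: sampled Job Outcomes that failed post-payment
        # validation, divided by the total number sampled.
        adjustment_factor = failed_in_sample / sampled
        # The ratio is applied to the Job Outcomes that were neither sampled
        # nor already validated, and the estimated invalid share is removed.
        adjustable = reported_outcomes - sampled - already_validated
        return reported_outcomes - adjustable * adjustment_factor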

Feedback check with data suppliers

Once the validation process has been completed, Work Programme providers have the opportunity to challenge its results. Time is allowed for providers to challenge and for DWP to assess and arbitrate any challenge. This process can take approximately three months, so the official statistics on Job Outcomes for some providers may be revised slightly in the following quarterly release. DWP says that the effect of these revisions has so far been minimal.

The end-to-end post-payment validation process takes approximately 8 and a half months to complete. The routine sampling, checks and production of error rates take just over 1 month and are performed on the previous six months' Job Outcome payments.

 

More information about the quality checks is available in the Background Information Note for the Work Programme. Will Oliver is the lead statistician for DWP’s Work Programme statistics and is happy to speak with anyone wanting to know about their approach to quality assurance. Will can be contacted by email.

 

Audit Example 2

NHS England's children's patient safety incident reporting cases

The NHS Outcomes Framework in 2014-15 included three indicators that measure reported patient safety incidents. The National Reporting and Learning System (NRLS), managed by NHS England, collates reported patient safety incident statistics. These statistics are known to be incomplete as they cannot capture information about incidents that remain unreported. Incidents with a degree of harm of ‘severe’ and ‘death’ are now a mandatory reporting requirement by the Care Quality Commission (CQC), via the NRLS, but under-reporting is still likely to occur. Adverse events in healthcare cannot be completely eliminated; however, there is a need to learn from events when they occur. NHS England says that with the introduction of the NRLS, reporting is improving. Maximising the potential to reduce incidents will be supported by continued improvements in reporting.

In developing the NHS Outcomes Framework, DH and NHS England sought to address patient safety across different patient groups and recognised children as a particularly vulnerable group. NHS England therefore built on a suggestion by the National Patient Safety Agency (NPSA) to consider identifying an indicator derived from the NRLS which could capture ‘failure to monitor’ specifically for children.

NPSA research published in 2007 outlined that around 11% of deaths in hospitalised patients were related to patient deterioration. National Institute for Health and Care Excellence (NICE) clinical guideline CG50 relates to acutely ill patients in hospital, and shared learning on its implementation includes “improving the detection and response to patient deterioration.”

A key way in which deterioration causes harm is through a ‘failure to monitor’, which includes (but is not limited to) failures of care such as not taking observations for a prolonged period, not recognising the early signs of deterioration, and a delay in the patient receiving medical attention. This formed indicator 5.6 in the NHS Outcomes Framework.

A review of 50 cases found that the indicator was not capturing the type of incidents it originally set out to capture. The case review found the following:

  • Only 3 incidents referred to failure to monitor in the sense intended by this indicator (physiological signs and symptoms of deteriorating condition not taken or not recognised or not acted on)
  • 11 could possibly be counted as failure to monitor if the definition was stretched as far as possible (some aspect of the patient’s condition not being recognised promptly, e.g. a pregnant 16-year-old who may have been in early labour but did not receive an appropriate vaginal examination)
  • 35 had nothing to do with child deterioration (most often they referred to pressure injuries from devices or displaced IVs, but there was a miscellaneous collection of other items)
  • 1 was clearly not a child although their age was recorded as 0.25 years.

In light of these findings a thorough review of the indicator methodology was carried out. It led to the Department of Health deciding to remove this indicator from the NHS Outcomes Framework for 2015-16. A decision to publish statistics requires the user need for information to be balanced against the quality of the data.

In this case, the data quality is insufficient to support the appropriate use of the statistics.

 

Audit Example 3

HSE Non-fatal injury statistics

Administrative data on specified fatal and non-fatal injuries, occupational diseases and dangerous occurrences are collected under RIDDOR (the Reporting of Injuries, Diseases and Dangerous Occurrences Regulations 1995). HSE stores injury information on an operational database for its area of enforcement. The database is maintained continuously. HSE combines data from its database with data provided by local authorities (LAs) and ORR to produce the RIDDOR injury statistics.

The regular review of injury records by the HSE inspectorate provides one type of audit of the injury data. Two other types of audit have been conducted by, or on behalf of, HSE: internal audit and a statistical audit of non-fatal injury data. HSE also compares the reporting of non-fatal injuries obtained through RIDDOR with self-reported survey data from the Labour Force Survey (LFS). It commissions these survey questions to gain a view of work-related illness and workplace injury based on individuals’ perceptions, and also presents these statistics in its annual statistical outputs.

Internal audit

In March 2012, HSE’s Internal Audit team reviewed the RIDDOR system. This review was initiated by the statistics team following the transfer of responsibility for the injury notification collection system to the team in September 2011. The Internal Audit team’s review focused in particular on how the process of reporting fatal and major injuries was working and examined: wrongly allocated reports; backlogs of unallocated reports; the clarity of guidance documents in relation to the reporting of incidents; the clarity of information provided in some aspects of the reports; and the experience of local offices and the HSE switchboard in responding to enquiries.

The team identified some areas requiring improvement. Following the audit, the statistics team:

  • introduced some improvements to the online reporting form;
  • provided guidance to assist the completion of the form;
  • changed the review process for determining whether an incident was reportable or not; and,
  • worked with front-line staff to better understand the potential impact of malicious reports on the injury statistics.

Internal Audit also conducted a follow up review in April 2013 to assess progress in addressing the required improvements and determined that appropriate actions had been taken.

Statistical audit of non-fatal injury data

HSE commissioned a survey to check the information recorded on RIDDOR by speaking with injured employees about the event. It enables HSE to better understand the issues that affect the reporting of non-fatal injuries and the potential biases that occur as a result, as well as to provide information on the amount of time taken off work, for reporting to Eurostat.

Based on the information obtained from interviewing around 2,000 injured people from a random sample of records of non-fatal injuries reported by employers, the survey found that:

  • For injuries reported as major by employers:
    • 90% were confirmed as major
    • 10% were found not to reach the threshold for a major injury (that is, were over-reported)
  • For injuries reported as over-7 day injuries by employers:
    • 60% were confirmed as over-7 day
    • 23% were under-reported and subsequently found to be major injuries
    • 17% were over-reported (i.e. were below the threshold required)

Overall, however, the survey concluded that the method was sufficiently rigorous to produce robust estimates of the average number of working days lost to workplace injury per worker to meet Eurostat’s needs.
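As a minimal sketch of how such rates can be derived (the severity categories and record structure below are hypothetical), the matched sample can be tabulated by employer-reported severity against the severity established at interview:

    # Illustrative sketch only: compute confirmation / over-reporting /
    # under-reporting rates from pairs of (employer-reported severity,
    # severity established at the follow-up interview).
    from collections import Counter

    def classification_rates(matched_records):
        """matched_records: list of (reported, actual) pairs, e.g.
        ("major", "major") or ("over-7-day", "below-threshold")."""
        by_reported = Counter(reported for reported, _ in matched_records)
        pair_counts = Counter(matched_records)
        return {
            (reported, actual): count / by_reported[reported]
            for (reported, actual), count in pair_counts.items()
        }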

 

David Leigh is the lead statistician for HSE’s injury statistics and can be reached by email.