Dear Ed,

Thank you for your email. Although I fully accept that we are both concerned to achieve improvements in our statistics, we nevertheless consider it necessary to take issue with a number of points in your letter, particularly the OSR’s recent review of the CIS. However, it remains our concern to avoid public controversy if possible, and I hope you will consider that the suggestion in the conclusion below represents a constructive way forward. Meanwhile, this reply follows your headings.

Public Confidence in Official Statistics

Whilst we appreciate that you revised your report to correct the potentially misleading statements originally made by the ONS, we would respectfully point out that the error in reporting we had referred to is almost universal within ONS reports. For example, I wrote on this subject to the BICS ‘team’ in May 2020, as indicated by this blog. Although they subsequently improved the sample and estimation process for the BICS, they never addressed the issue of reporting the results in a potentially misleading manner. Indeed, the recently published blog comparing HMRC tax data with results from ONS enterprise research is a prime example of over-enthusiastic reporting. This is not to say that the work lacks merit, but it supports our view that a review of reporting practice is needed and, possibly, a revamp of training in report writing as well.

Population estimates

We note your comments on this issue and I think we must put our differing opinions down to different perspectives!

The questions that therefore arise are: what is the purpose of your report, and to whom is it primarily directed? We were looking at it more from the ‘user’ point of view and, I now imagine, you are looking at it from the perspective of reporting to Parliament. Our view is that the UKSA has not served the public well in respect of Coventry and a few other cities and has failed to recognise that fact publicly, although Sir Ian Diamond did do so in his recent plenary speech to the BSPS conference in Winchester. To better understand the ‘user’ perspective, please see Merle Gering’s history of the CPRE / ONS dispute.

CDEI Tracker Study

We have had significant engagement on this tracker research with Savanta and with representatives of CDEI, although possibly too late to have a significant impact on the second wave report, which is due to be published shortly. Specifically, we have undertaken a special analysis of response rates in an attempt to improve future representativeness, and discussions continue accordingly. We have found the research team to be completely open in providing relevant information, withholding none of the details that we have requested. We understand that they intend to acknowledge our assistance in the wave 2 report when it is published.

CIS

There appears to have been a minor misunderstanding, because we had not understood Anna Price’s email to be your final word on this subject. Now that we do, we have responded directly to Anna; our reply is copied with this letter. More importantly, the OSR’s review of the CIS published on 30th August has extended our concerns to the effectiveness of the OSR itself, because we are unable to reconcile the information we had provided to your investigators with Mr. Pont’s report. For example, the comment in Anna’s email that “We agree that the explanation of whether the household size variable was and was not included in analyses was not very clear” fails to acknowledge the persistence with which we had requested analysis by that variable from the CIS team, analysis which was never supplied and is still not available.

As a result, the key questions we have posed concerning the CIS remain unanswered, and we append a document intended to explain our concerns more clearly. This includes some details reproduced from the original Lancet article published in December 2020, as referred to in the recently published QMI that Anna mentions. In doing so, we wish to emphasise that we do not doubt that the CIS has been helpful in providing early warning of the spread of infection and in otherwise providing guidance to policy makers. We simply believe that the estimation methods could possibly have been improved by first using a rim weighting process to obtain the best national estimates and then applying the Bayesian model for the detailed infection rates by area. More importantly, we consider that the lack of transparency of the estimation methods represents a failure to satisfy that pillar of the Code of Practice.

There has also been a lack of transparency concerning other aspects of the study, including some of the reporting and details of the sampling and recruitment methods applied. These details were also covered in the evidence provided to the OSR team by Better Statistics and appear to have been ignored.

We would therefore claim that a regulatory system that fails to address these concerns is unable to judge Quality properly, and I hope you will accept that this is a less than satisfactory state of affairs. For that reason we are considering asking the Royal Statistical Society and / or the National Statistician’s Expert User Advisory Committee (NSEUAC) to conduct a special investigation of our communications with the CIS team, to review whether the ONS and the OSR have correctly fulfilled their remits.

Perhaps we could discuss what would be the most appropriate course of action?

Publishing Correspondence:

We will send you a PDF of the proposed file of correspondence prior to publication. We would expect to include the correspondence with you and the emails we had sent to Anna, together with her replies.

Value for Money:

You have not commented further on this issue, following your previous assertion that the existing Code of Practice covers it adequately. Again we must respectfully disagree. We had drawn attention to clauses in the IQVIA contract which, it appears, have never been implemented. Certainly our requests for information on those clauses have been ignored by the ONS and now by the OSR, which implies to us that something further is required on this subject within the Code of Practice.

Future Progress:

As noted above, we share the objective of improving our statistics, although we approach the task from different perspectives: your aim is to ensure that statistics are ‘for the public good’, while ours is to “promote public awareness of, and interest in, the production of accurate and relevant statistics”. Evidently, the Code of Practice is a primary topic of common interest for both the OSR and Better Statistics, and I would venture to suggest that we are both concerned that the code should be:

  • As effective as possible;
  • Honoured by all institutions responsible for producing statistics;
  • Recognised, understood and supported by as wide a public as possible.

Accordingly, may I suggest that the OSR and Better Statistics convene a one-day seminar in late January 2022, with the theme of examining each element of the code in order to discuss possible improvements both to the code and to the regulatory regime. To be effective, we should invite papers / proposals from the GSS, the ONS, opinion polling companies, commercial research companies, ‘big data’ tech companies and others with an interest in this topic.

As an aside, I was interested to read the recent report on the “UK-wide public dialogue exploring what the public perceive as ‘public good’ use of data for research and statistics” and would strongly endorse its plea for Clear Communications. However, I could find no reference to the Code of Practice in the report, even though I believe the code applies as much to administrative data as to any other source of statistics.

Hopefully we can work together to further the objectives outlined above.

Yours sincerely,

Tony Dent
On Behalf of Better Statistics CIC


Appendix: Methodological concerns, including comments on the Lancet report of 20th December 2020

Background

Following a review of the QMI published in July 2021, including the response rates by various demographics as published in that document, we wrote to Kara Steel and Byron Davies on 7th August 2021 to ask:

“Are the response rates by household size published for the infection survey?

Is there any adjustment for size of household made in the weighting or is it only other demographics used?

Are there any examples of the weighting process?

Thank you for your kind consideration”

On 17th August we received the following reply from the Survey Analysis team:

“Thank you for your email.

The COVID-19 infection survey is based on a nationally representative survey sample; however, some individuals in the original Office for National Statistics (ONS) survey samples will have dropped out and others will not have responded to the survey.

To address this and reduce potential bias, the regression models adjust the survey results to be more representative of the overall population in terms of age, sex and region (region was only adjusted for in the England model).

The regression models do not adjust for household tenure or household size.”

On 18th August we asked:

“Is there a particular reason as to why neither household tenure nor household size were included in the Bayesian model?

“Household size in particular is likely to have affected response, as indeed is tenure insofar as I would hypothesise a lower response rate from single person households in relatively poorer communities.”

On 24 August 2021 we received the following reply:

“We undertook testing to adjust for household size in our modelling but found that this had a negligible impact on our estimates.

“In regard to tenure, we do not collect this information as part of the COVID-19 Infection survey, therefore, including this would only have been possible for the participants who have been sampled from other ONS surveys where this information is collected.”

Following further correspondence, on 21st September 2021 the analysis team re-emphasised:

“We undertook testing to adjust for household size in our modelling but found that this had a negligible impact on our estimates.

Additionally, we have found no significant difference in response rates by household size. We have no further published information on household size.”

We note that the original reason given for not including household size in the weighting model was that the variable had only a negligible effect on the data analysed at that time. As a result, the Bayesian weighting method decided upon did not allow the subsequent use of household size as a weighting variable to correct for any bias on that variable. See also the Comment below.

The Lancet report

The report describes the approach that has been used to estimate the infection rate in various parts of the UK since December 2020. As far as we can tell, no further analysis has been undertaken to review the work done at that time, despite the greatly increased volume of information that has subsequently been obtained.

Better Statistics wish to emphasise the following points:

  1. We entirely accept the following statements from the introduction to the Lancet report: “An important strength of our population-based study is that it can detect increases in the SARS-CoV-2 positivity rate potentially earlier and more systematically than can surveillance based on confirmed cases, hospital admissions, or deaths.”

and

“This advantage of our study will be most useful when new increases in SARS-CoV-2 positivity initially occur in a subgroup of the population at low risk of hospitalisation and death, but whose infections still contribute to transmission (including if asymptomatic)”

However, we note that this latter advantage depends upon the sample representing such subgroups in some way.

  2. We also accept the requirement for a weighting procedure based upon post-stratification of the sample. To achieve that, the researcher(s) will have analysed the available sample data by as many independent variables as possible to determine which variables account for the greatest variance in the measured spread of the infection. Only those variables with acceptable population numbers available as control variables are suitable for this analysis. There is evidence that this work was done, but none of the details have been published.
  3. Having determined the parameters that can be used for the post-stratification weighting, there is then the choice of the method to be used. We agree that there is insufficient control data available to use the preferred method of target weighting, and therefore some form of iterative procedure was required. However, for clarity we would prefer to have had details of the actual weighting procedure used, possibly illustrating the effects on some example records; a minimal sketch of one such iterative procedure is given directly below this list.
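
To show the kind of iterative procedure we have in mind, the following sketch applies rim weighting (iterative proportional fitting) to marginal targets for age band, region and household size. We stress that the margin names, the target proportions and the example data are all invented for illustration; this is not the weighting procedure used by the CIS team.

# A minimal sketch of rim weighting (raking) in Python with pandas.
# All margins and targets are hypothetical, chosen only to illustrate the
# technique; they are NOT the CIS specification.
import pandas as pd

def rim_weight(sample, targets, max_iter=50, tol=1e-6):
    """Iteratively adjust case weights until the weighted sample margins
    match the target population proportions for every control variable."""
    weights = pd.Series(1.0, index=sample.index)
    for _ in range(max_iter):
        max_shift = 0.0
        for col, props in targets.items():
            # Current weighted share of each category on this margin
            current = weights.groupby(sample[col]).sum() / weights.sum()
            # Multiplicative adjustment factor per category
            factors = {cat: props[cat] / current[cat] for cat in props}
            adjustment = sample[col].map(factors)
            weights = weights * adjustment
            max_shift = max(max_shift, (adjustment - 1.0).abs().max())
        if max_shift < tol:
            break
    # Rescale so the weights sum to the sample size
    return weights * len(sample) / weights.sum()

# Hypothetical example with three margins, including household size, the
# variable at issue in our correspondence with the CIS team.
sample = pd.DataFrame({
    "age_band": ["17-24", "25-49", "50-69", "70+"] * 25,
    "region": ["North", "South"] * 50,
    "hh_size": [1, 2, 3, 4, 5] * 20,
})
targets = {
    "age_band": {"17-24": 0.12, "25-49": 0.42, "50-69": 0.30, "70+": 0.16},
    "region": {"North": 0.45, "South": 0.55},
    "hh_size": {1: 0.29, 2: 0.35, 3: 0.16, 4: 0.14, 5: 0.06},
}
weights = rim_weight(sample, targets)

Weights of this kind could provide the national-level estimates first, with the Bayesian model then applied for the detailed infection rates by area, which is the ordering suggested in the letter above.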

Nature of our concern:

The data available for the analysis reported in the Lancet review is derived from the following:

  • The initial sample of respondents was obtained by re-contacting respondents to the Labour Force Survey; these respondents were recruited in the period April through June 2020.
  • A random selection of addresses from the Ordnance Survey AddressBase file, recruited from June through November 2020.

In both cases, recruitment was a two-stage process: survey participants were first invited to telephone IQVIA to register their interest and were then visited by an interviewer to complete the recruitment questionnaires and to provide the first ‘swab’. Following that initial visit, there were regular further visits to provide swabs for Covid-19 testing.

For clarity we here reproduce the summary of findings and the interpretation as provided in the Lancet report:

“Findings

Between April 26 and Nov 1, 2020, results were available from 1 191 170 samples from 280 327 individuals; 5231 samples were positive overall, from 3923 individuals. The percentage of people testing positive for SARS-CoV-2 changed substantially over time, with an initial decrease between April 26 and June 28, 2020, from 0·40% (95% credible interval 0·29–0·54) to 0·06% (0·04–0·07), followed by low levels during July and August, 2020, before substantial increases at the end of August, 2020, with percentages testing positive above 1% from the end of October, 2020. Having a patient-facing role and working outside your home were important risk factors for testing positive for SARS-CoV-2 at the end of the first wave (April 26 to June 28, 2020), but not in the second wave (from the end of August to Nov 1, 2020). Age (young adults, particularly those aged 17–24 years) was an important initial driver of increased positivity rates in the second wave. For example, the estimated percentage of individuals testing positive was more than six times higher in those aged 17–24 years than in those aged 70 years or older at the end of September, 2020. A substantial proportion of infections were in individuals not reporting symptoms around their positive test (45–68%, dependent on calendar time).

Interpretation

Important risk factors for testing positive for SARS-CoV-2 varied substantially between the part of the first wave that was captured by the study (April to June, 2020) and the first part of the second wave of increased positivity rates (end of August to Nov 1, 2020), and a substantial proportion of infections were in individuals not reporting symptoms, indicating that continued monitoring for SARS-CoV-2 in the community will be important for managing the COVID-19 pandemic moving forwards.”

Comment:

The finding we would highlight from the passages above is that the factors influencing the spread of disease varied significantly between the first and second waves of the pandemic. We hypothesise that an important difference between the two phases was that society was in lockdown during the first wave but not during the early part of the second. The effect of that would have been to significantly reduce the influence of household size on the data in the first wave, because all children and non-essential workers were kept at home. Hence the finding of the relative vulnerability of those who were not able to work from home, particularly those working with patients.

We wish to understand whether a ‘dummy variable’ was applied in the regression to distinguish measurements obtained during lockdown from those obtained when freedom of movement was available.
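
For illustration only, the sort of specification we have in mind is sketched below: a logistic regression of test positivity that includes a lockdown indicator (‘dummy’) and its interaction with household size. The variable names (positive, lockdown, hh_size and so on) are hypothetical, and this is not the model actually fitted by the CIS team.

# A minimal sketch of a positivity regression with a lockdown dummy variable.
# Variable names are hypothetical; this is not the CIS model.
import pandas as pd
import statsmodels.formula.api as smf

def fit_with_lockdown_dummy(visits: pd.DataFrame):
    """'visits' is assumed to hold one row per swab, with columns
    positive (0/1), age_band, sex, region, hh_size and
    lockdown (1 if the swab was taken while restrictions applied, else 0)."""
    model = smf.logit(
        "positive ~ C(age_band) + C(sex) + C(region) + hh_size * lockdown",
        data=visits,
    )
    return model.fit()

If the interaction term proved material, it would support the hypothesis set out above that household size mattered more once lockdown was lifted.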

However, it is not our purpose to criticise the initial estimation of the national incidence of infection from the survey data available at that time. We appreciate the urgency of the requirement and the complexity of the task of providing results at the required granularity. Moreover, our concern that the details of the process have not been provided is also not our primary issue.

The more serious point is that, as far as we are aware, the estimation of the incidence data has never been revisited since the Lancet report, despite the increase in the information that has become available. For example, an important element of the survey was that more than one member of a household could be recruited, depending on their willingness to participate. The initial work concluded that interaction within households did not need to be considered in the estimation; would the same conclusion be drawn today?
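
By way of illustration, one simple way to revisit within-household interaction would be to refit a positivity model treating households as clusters, for example using generalised estimating equations. Again, the variable names (household_id, positive, hh_size and so on) are hypothetical; this is not a description of the CIS team’s method.

# A minimal sketch of allowing for within-household correlation by
# clustering swab results on a household identifier. Hypothetical names;
# not the CIS specification.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_household_clustered(visits: pd.DataFrame):
    """'visits' is assumed to hold one row per swab, with a household_id
    column identifying the household of each participant."""
    model = smf.gee(
        "positive ~ C(age_band) + C(sex) + C(region) + hh_size",
        groups="household_id",
        data=visits,
        family=sm.families.Binomial(),
    )
    return model.fit()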

Although we accept that estimates of the population by household size are not available in the detail required for the chosen Bayesian MRP model, estimates of that variable are available with a granularity sufficient for a rim weighting method. No evidence has ever been provided to support the choice of the Bayesian method, and we would be interested to understand what alternatives were considered.

Better Statistics had believed that it would be part of the remit of the Office for Statistics Regulation to review this matter; otherwise we fail to understand how any conclusion can be drawn on the Quality pillar of the code. Equally, we had expected the OSR specifically to have reviewed the Value for Money of a study that has cost more than £1 billion. Yet both factors appear not to be part of the OSR’s remit.

It is difficult to maintain trust in statistics when the regulator seems reluctant to regulate. In these circumstances it is our opinion that the role of the regulator requires urgent examination.

Meanwhile, it is worth noting that the Reproducible Analytical Pipeline initiative could have provided an opportunity to mitigate some of the above concerns, had it been properly deployed for the CIS. Full access to such a pipeline would have allowed the technical issues to be explored without the CIS team being distracted by our enquiries.

