NEU to Ed Humpherson: Comparisons of the proportion of good and outstanding schools over time

Dear Mr Humpherson,

I am writing because I am concerned about the misuse of Ofsted statistics by the Department for Education and by the Secretary of State for Education Gillian Keegan, both in a press release and on social media.

In a press release titled “First step towards introducing the Advanced British Standard” and in a post on Gillian Keegan’s account on X (formerly Twitter), the DfE and Gillian Keegan claimed “89% of schools being rated good or outstanding by Ofsted, up from just 68% in 2010” and “89% of schools are now rated good or outstanding, up from just 68% under Labour”, respectively.

This statistic is misleading, as it suggests to the reader that school-level Ofsted ratings are comparable between these two time periods.  Due to changes in the school inspection framework, there is a substantial difference in the meaning of a ‘good or outstanding’ school across this period.  Suggesting that these statistics are comparable violates several principles of the Code of Practice for Statistics, including T3.8, which requires that policy, press, and ministerial statements meet basic professional standards of statistical presentation, and several points of the quality and value pillars, including Q1.4, Q1.5, Q1.7, Q3.3, and V3.2, which concern the comparability of statistics.

The statistics which these statements refer to were reported by Ofsted in the annual reports for 2010 and 2023 and refer to the proportion of schools rated good or outstanding at their last inspection as of August 31st of each year.  Between 2010 and 2023 there have been several changes to the inspection framework; the most notable was in September 2012 under the coalition government.  This introduced the current system of judgements where schools are rated from 1 to 4, with the grades corresponding to ‘Outstanding’, ‘Good’, ‘Requires Improvement’, and ‘Inadequate’.  Prior to this change, the grades corresponded to ‘Outstanding’, ‘Good’, ‘Satisfactory’, and ‘Inadequate’.

This change led to a redefinition of the second and third categories; while the second category retained the ‘Good’ descriptor, the actual criteria to meet this category changed due to the reclassification of ‘Satisfactory’ to ‘Requires Improvement’.

The impact of these changes can be demonstrated by comparing Ofsted grade descriptions for inspections from January 2012 to those for inspections from September 2012.  Prior to the framework change, the 3 (satisfactory) grade descriptor for overall effectiveness included details on several areas of school performance and was similar in length to the descriptors for good and outstanding schools.  It read, in part: “pupils and groups of pupils have a generally positive experience at school and are not disadvantaged as they move on to the next stage of their education, training, or employment”.

The September 2012 description of a grade 3 “requires improvement” school was simply: “The school requires improvement because one or more of the four key judgements requires improvement (grade 3) and/or there are weaknesses in the overall provision for the pupils’ spiritual, moral, social, and cultural development”.  Within the individual key judgements, the ‘requires improvement’ grade description is that a given area “requires improvement as it is not good”, but does not provide further detail, with the one exception in leadership and management:  “Leadership and/or management require improvement because they are not good but are demonstrating the capacity to secure improvement in the school”.

The good, outstanding, and inadequate descriptions are more detailed.  In some cases, the description for a good grade in September 2012 overlaps with the description for a satisfactory grade in the January 2012 document.  In at least one instance, standards for a September 2012 good school are lower than those of a January 2012 satisfactory school.

In the September 2012 document, a good grade for pupil attainment requires that “Taking account of their different starting points, the proportions of pupils making and exceeding expected progress compare favourably with national figures.  Where the proportion making expected progress overall is lower than that found nationally, it is improving over a sustained period”, while in January 2012, even a satisfactory grade required that “Pupils are progressing at least as well as all pupils nationally given their starting points.”  Similarly, in January, a satisfactory description read that “In exceptional circumstances, where attainment, including attainment in reading in primary schools, is low overall, it is improving over a sustained period”, while a good description in September 2012 made no mention of this being exceptional and simply read “Where attainment, including attainment in reading in primary schools, is low overall, it is improving at a faster rate than nationally, over a sustained period”.

In effect, elements of the previous good and satisfactory descriptors have been combined to create a new, broader good category, while the new requires improvement category is simply characterised as lacking good traits.  As a result, schools that would have been graded satisfactory under the previous framework are graded good under the new framework, inflating the proportion of good schools.

This is evidenced by the graph below, which was compiled from two sources of Ofsted data on the proportion of schools at each grade level at their most recent inspection as of 31 August of a given year.

[Graph: proportion of schools at each Ofsted grade at their most recent inspection, as of 31 August each year, 2010 to 2023]

The increase in the proportion of schools rated good or outstanding between 2010 and 2023 is primarily due to an increase in the proportion of schools rated good, and a decrease in schools rated requires improvement, between 2012 and 2016, corresponding to the adoption of ‘requires improvement’ in place of ‘satisfactory’ as the grade 3 descriptor for overall effectiveness.  This shift took several years to become fully evident, as not all schools are inspected each year; depending on their previous grading and a risk assessment, they are generally reinspected every two to five years.  Once most schools had been inspected under the new framework, around 2016, the proportion of schools rated good or outstanding remained relatively stable.  Throughout the period of interest, the proportion of schools judged inadequate remained virtually unchanged at 2–4%.  The proportion of schools judged outstanding also remained largely stable at about 20%, although in recent years it has fallen to 16% after the policy of exempting outstanding schools from reinspection was ended.  This has corresponded with a slight increase in the proportion of schools graded good, but little impact is evident in the proportions rated requires improvement or inadequate.

This provides additional support for the argument that the increase in the proportion of ‘good or outstanding’ schools is because the meaning of ‘good’ changed in 2012, rather than because of “our fantastic teachers and the evidence-based reforms we’ve taken”, as Gillian Keegan claims in her post on X.  She goes on to claim that the Conservative government is “delivering high and rising standards in our schools”.  This evidence suggests that, in the case of ‘good’ schools, the government has in fact lowered, not raised, standards, artificially inflating the proportion of schools rated good or outstanding.  I therefore request that you investigate these statements and provide guidance on whether they are misleading, and on whether Ofsted statistics of this sort can be compared over time.

Yours sincerely

General Secretary

National Education Union

Ed Humpherson to NEU: Comparisons of the proportion of good and outstanding schools over time

Dear Daniel 

Comparisons of the proportion of good and outstanding schools over time 

Thank you for your letter on 18 April regarding the comparison of the proportion of good and outstanding schools over time by the Department for Education (DfE). You raised several points about the comparison between 2023 and 2010, and whether this should be considered misleading.  

We agree that there are a number of factors that have likely influenced the number of good and outstanding schools over time. These include changes made to the inspection frameworks as outlined in your letter, as well as other factors such as the impact of Covid-19 and the exemption of outstanding schools from inspections between 2012 and 2020.  

It is our view that the proportion of good and outstanding schools can be a useful indicator of school performance over time. However, as with any high-level comparison, the nuanced nature of any changes may not be fully reflected when it is used across an extended time series. 

We have reviewed the information provided in Ofsted’s methodology information for state-funded schools inspections and outcomes. This document notes that “The overall effectiveness judgement has remained broadly comparable across the different frameworks since 2005” but also that “A range of factors affect both how and when we inspect and limit the comparability of inspection outcomes over time”. It is our view that these messages could appear to be conflicting to users. 

We welcome the updated line Ofsted has now published in the methodology information and hope that this will support future communications of these figures when comparisons are made over long periods. If you have any additional concerns or if you would like to discuss this matter further, then please contact us.  

Yours sincerely  

 

Ed Humpherson
Director General, OSR 

 

Ed Humpherson to Phillip Oppenheim: The UK’s standing in the tech and life sectors.

Dear Phillip,

Thank you for raising your concerns around statements made by the Chancellor of the Exchequer, Jeremy Hunt MP, on the UK’s standing in the tech and life sectors, as well as economic growth.

The tech sector now sits with the Department for Science, Innovation and Technology (DSIT), which informed us that the press release was produced by Burlington, a PR agency, in collaboration with the Digital Economy Council (DEC) and Dealroom. The underlying data can be found on Dealroom via this link, based on the “Ecosystem value” column. We consider that the specification of the claim could have been made clearer, as market valuation is not necessarily synonymous with economic activity but has been used interchangeably with it by the Chancellor.

HM Treasury (HMT) has explained to us that the Chancellor’s claim that the UK has the “best Life Sciences sector in Europe” was based on the equity finance raised by the life science industry, in which the UK was third globally, behind the USA and China. The underlying data can be found in the Life Science Competitiveness Indicators (LSCI) 2022 data table, in Tab 27. However, we note that the UK has varied performance on other indicators and does not appear to be the best in Europe across all categories. Again, we consider that the statement would benefit from being clear on the specific measure being used to make the claim.

You also raised concerns around the Chancellor’s claim that the UK was “the fastest-growing large European country, not just since the pandemic but since Brexit and since 2010”. This statement was unclear, containing ambiguity around ‘large’, time references, and the data source. However, assuming ‘large’ refers to the European members of the G7, Brexit refers to the year 2016, and the pandemic refers to the year 2020, the claim can be verified using Organisation for Economic Co-operation and Development (OECD) GDP data. Please note the OECD’s data is provided as GDP levels, given in constant prices, OECD base year (2015).

            Since 2010    Since Brexit (2016)    Since COVID (2020)
France      13.8%         7.0%                   9.0%
Germany     17.7%         5.9%                   5.0%
Italy       3.2%          5.4%                   12.3%
UK          20.9%         7.6%                   13.4%

HMT informed us that claims on international comparisons are usually based on quarterly GDP data from the relevant national statistics institute for each country and were subsequently referenced in the Spring Budget 2024 Data Sources document.

Transparency and clarity support public confidence in statistics and minimise the risk of misinterpretation of statistics and data. We are continuing to work with HMT to ensure future communications are clear and comply with our expectations of intelligent transparency.

Once again, thank you for contacting the Office for Statistics Regulation.

Yours sincerely, 

Ed Humpherson
Director General for Regulation

 

Related Links:

Phillip Oppenheim to Ed Humpherson: The UK’s standing in the tech and life sectors. (January 2024)

Phillip Oppenheim to Ed Humpherson: The UK’s standing in the tech and life sectors.

Dear Sir/Madam

I would appreciate you checking a claim made frequently by government ministers and the Chancellor about Britain’s status in tech and life sciences

I attach my research which is part of a book in preparation – I think that this definitely proves that the release sent out by a government department and the claims repeated by the Chancellor for nearly a year are false

I have written to the Chancellor twice since March last year, once recorded, with no response – my local MP promised to chase twice, but so far, nothing

Many thanks

Phillip Oppenheim

 

 

Dear Sir/Madam

I have sent you an email re a claim about UK tech which I would appreciate your checking

The second issue I would appreciate your checking is below

Claim by Chancellor Jeremy Hunt: “If you look at the fundamentals of the British economy we have had our setbacks, like everyone else, but we are the fastest-growing large European country, not just since the pandemic but since Brexit and since 2010”

The Times, Jeremy Hunt: ‘Britain doesn’t need to be famous for pessimism’, September 19th, 2023, repeated frequently since

– Taking 2010 as the base, the year the Conservatives came to power, the UK’s economic growth was 21.4%, marginally ahead of Germany at 20.3%, well ahead of France at 16.7% and Italy at just 4%

– But since the 2016 Brexit referendum, the UK has grown by 7.3%, less than France at 7.8% but more than Germany at 5% and Italy at 5.2%, so Hunt is not correct.

– Britain’s growth from the pandemic to Q3 2023 was just 1.4% – better than Germany’s 0.3% but worse than France’s 1.7% or Italy’s 3.4%. So again, Hunt is incorrect – Britain has not been the fastest-growing large European economy since the pandemic, despite having the worst performance during the pandemic, when GDP fell by 11% compared to 9% in Italy, the next worst performer. (In fact, since the pandemic, Britain’s economic growth has been the second worst in the G7 group of large economies – well below Japan (2.4%), the eurozone (3%), the EU (3.4%), the G7 as a whole (4.7%), the OECD (6%) and the United States (7.4%).)

I look forward to your response

Phillip Oppenheim

 

 

Related Links:

Ed Humpherson to Phillip Oppenheim: The UK’s standing in the tech and life sectors. (April 2024)

Ed Humpherson to Veronica Hawking: NHS waiting lists in England

Updated on 25 January 2024 to correct factual error regarding who publishes official statistics on NHS waiting lists in England: incorrectly listed as DHSC, changed to NHS England

Dear Veronica,

NHS waiting lists in England

Thank you for your letter regarding comments made by the Department for Health and Social Care (DHSC) on NHS waiting lists in England. I am replying as the Head of the Office for Statistics Regulation, on behalf of Sir Robert Chote.

As you note, in an article in the Times on 4 January 2024, a DHSC spokesperson said: “Cutting waiting lists is one of the government’s top five priorities, and despite the impact of strikes, we have cut the total waiting list and the number of individual patients waiting for treatment.”

NHS England publishes monthly official statistics on NHS waiting lists in England. This includes data on ‘incomplete pathways’, which are waiting times for patients still waiting to start treatment at the end of the month, and the number of unique patients on NHS waiting lists.

At the time of speaking, the available official statistics on NHS waiting lists in England showed the total number of incomplete pathways and unique patients on NHS waiting lists fell between September and October 2023. The latest data published on 11 January show that the total number of incomplete pathways and unique patients on NHS waiting lists also fell in November 2023.

It would better support understanding if DHSC had been clear about what time period it was referring to, to avoid potentially misleading people. This is particularly important as the fuller time series of waiting lists data, shown in the chart below, shows an upward trend in waiting lists over the last year; the statement could therefore be perceived as a selective use of data.


Consultant-led Referral to Treatment Waiting Times Data 2023-24, November 2023

The chart above shows the estimated number of incomplete pathways and the estimated number of unique patients waiting for consultant-led treatment with the NHS in England, from November 2022 to November 2023. The lines in the chart show a steady increase in both from November 2022 until reaching a peak in September 2023, and then falling.

Those speaking on behalf of Government should make it easy for people to understand and verify what data it uses to inform the public. Taking a transparent approach by default ensures confidence in public messaging is not undermined.

Yours sincerely,

Ed Humpherson

Veronica Hawking to Sir Robert Chote: NHS waiting lists in England

Dear Sir Robert,

I am writing on behalf of 38 Degrees and our supporters to raise concerns about a potential misrepresentation of NHS waiting list data by the Department of Health and Social Care.

In a statement in today’s Times, a spokesperson for the Department of Health and Social Care said “cutting waiting lists is one of the government’s top five priorities, and despite the impact of strikes, we have cut the total waiting list and the number of individual patients waiting for treatment.” [1]

This statement is in response to an open letter published a year to the day after the Prime Minister made getting waiting times down one of his five priorities. At the time when the Prime Minister made that promise, 6.06 million patients were waiting for 7.20 million procedures. [2] In October 2023 – the latest data available – 6.44 million patients were waiting for 7.71 million procedures. [3]

We believe this spokesperson may be referring to the fact that between September and October 2023, the number of patients fell from 6.50 million to 6.44 million and the number of procedures fell from 7.77 million to 7.71 million. [4] However, we believe it is misrepresentative to claim that the government has cut waiting lists, especially considering that these figures are still higher than they were when the Prime Minister made his promise that they would fall, on 4 January 2023.

We raise this matter in light of other concerns raised regarding the accuracy of government statements, including the claim that ministers have “cleared” the asylum backlog, which you are currently investigating. [5]

We therefore ask that you investigate the Department of Health and Social Care’s statement to The Times and offer your guidance on whether it is misrepresentative.

We look forward to your response on this matter.

Yours sincerely,

 

Veronica Hawking,

 

NOTES

[1] The Times, Thursday 4 January 2024, Page 6, “Celebrities urge PM to cut wait lists”
[2] NHS England: RTT Overview Timeseries Including Estimates for Missing Trusts Oct23
[3] Ibid
[4] Ibid
[5] The Standard: Statistics watchdog to scrutinise Sunak’s claim the asylum backlog has been ‘cleared’, Wednesday 3 January 2024

 

Ed Humpherson to Tony Dent: Household Costs Indices

Dear Tony,

Household Costs Indices

Thank you for your letter dated 14th December 2023, relating to the publication of the Household Costs Indices (HCIs), and more widely relating to the UK’s suite of inflation statistics. I hope you and your colleagues had a good festive break and I wish you all a Happy New Year for 2024.

It is good to hear of strong engagement with these statistics and that the new HCIs received a warm welcome at the Royal Statistical Society/Resolution Foundation event on 12th December.

You highlighted the demand for HCI statistics and asked when National Statistics status may be considered. ONS faces competing priorities and user needs following the delivery of the first quarterly release of these statistics. These include whether the new HCIs should be published monthly, the timing of the publication, and when the statistics should be put forward for formal accreditation (previously referred to as National Statistics badging). My team is in ongoing discussions with the ONS Prices team around these developments, the competing priorities and suitable timing for ONS to seek accreditation for these statistics. We will ensure that we seek your views once we start the assessment to consider accreditation of the HCIs.

In response to your concerns that CPIH is compromised by inadequacies, we would note that there is ongoing debate about the optimal methodology for capturing housing costs, and ongoing work within the ONS to improve the data that is received to capture these costs as fully as possible, both for rents and for owner-occupied housing. From a regulatory perspective, we are aware that the ONS is keen to seek accredited official statistics status for the new private rental series and, by default, the continued accredited statistics status of CPIH. With this in mind, alongside the ongoing debate about how best to capture housing costs, we plan to commence an assessment of the new Price Index of Private Rents and the owner-occupiers’ housing element of the CPIH in the spring. We will seek your views as part of this assessment.

We are pleased to see the improvements that ONS is making to these statistics and believe that ONS was transparent in publishing the impact analysis of including the new private rental series, setting out the limitations of the previous data and methods, which were the best available data at the time. The latest rents data are more comprehensive and detailed than the historic data, which, combined with methodology changes, is enabling more robust rent estimates to be produced.

Thank you again for your correspondence. We look forward to further discussions around UK official statistics in 2024, both relating to inflation and more generally.

Yours sincerely

Ed Humpherson

Ed Humpherson to Rob England: Withdrawn Asylum Applications 

Dear Mr England 

Withdrawn Asylum Applications

Thank you for contacting us. We have looked into your concern.   

The Home Office has confirmed that the total number of withdrawn asylum cases is correct but recognises that there may be an error where withdrawn asylum applications may not have been correctly classified into the two potential sub-classifications of withdrawal. The Home Office has included a note about this alongside the statistics while it continues to investigate the extent of the impact. While we recognise that this potential data quality issue may be disappointing to some users, and the Home Office could have made this issue more prominent for users, we are satisfied that the Home Office has plans in place to ensure this is resolved for future publications.   

We would also like to clarify the definitions of the two types of withdrawn asylum applications. An application is described as a ‘non-substantiated withdrawal’ if the applicant fails to cooperate with the process to examine and decide the asylum claim within a reasonable time period. This is the group referred to in the BBC article you shared with us and discussed at the Home Affairs Select Committee, and which has been described as ‘missing’. By contrast, where the applicant actively chooses to withdraw the application, this is classified as ‘other withdrawals’.  

The 17,316 figure is the total of both types of withdrawn asylum applications. This figure therefore includes those classified as other withdrawals. As these applications are withdrawn by the applicant, it would be incorrect to describe these applicants as missing because once their application has been withdrawn, they are expected to either leave the country or have an alternative agreement to stay. The error within the data, therefore, means that it is not at present publicly reported how many applicants would fall into the category described as ‘missing’ – that is, the number of applications that are classified as non-substantiated withdrawals.  

We will continue to engage with the Home Office to ensure this issue is resolved as soon as possible. 

Yours sincerely   

Ed Humpherson
Director General for Regulation 

Tony Dent to Ed Humpherson: Household Costs Indices

Dear Ed,

Following the publication of the quarterly HCIs on 5th December, I attended Tuesday’s informative discussion on those measures, hosted by the Royal Statistical Society in collaboration with the Resolution Foundation, where the event was held. The HCIs were welcomed almost universally by the various speakers, including Paul Lewis, the journalist and broadcaster, and Morgan Wild of Citizens Advice.

There is an evident demand for these statistics to be published monthly alongside the CPI, rather than quarterly as currently proposed. There is also interest in when they may be considered for National Statistics status. Although such issues are not under your direct control, Better Statistics are interested to understand whether any discussions are in progress as to when such status might be considered. In our opinion the HCIs are a clear example of the better statistics we seek!

The position is somewhat compromised by the evident inadequacies of the CPIH, which is presently subject to considerable adjustment to correct for inaccurate rents data. This latter has served to re-ignite the controversy created by selecting ‘rental equivalence’ to measure inflation as experienced by owner occupiers. As evidenced by the recent errors, replacing the robust calculation of the costs actually experienced by owner occupiers with the less than robust rental costs of a fabricated ‘measure’ is not appropriate. I am sure you are aware that the recent amendments to CPIH have created widespread comment, for example, the Twitter handle @TUCeconomics recently posted “ONS ‘what if’ calculations suggest CPIH might have been 0.5 percentage points higher in recent months.” Further comments on this matter were recently posted by Shaun Richards on the Stats User Net – please see The ONS View on owner-occupied housing is in disarray.

We therefore echo Shaun Richards’ comment at the end of his post that, surely, the National Statistics status must now be removed from the CPIH? Do you have plans to reconsider that status?

In closing, I wish you and all at the OSR a Merry Christmas and Happy New Year. Whatever may happen as a result of the UKSA review, I hope we might have the opportunity for further discussion on the Code of Practice in the New Year. That is the key to ensuring badly needed improvement across all of our statistics.

With all good wishes,

 

Tony Dent,

 

Related Links:

Ed Humpherson to Tony Dent: Household Costs Indices (January 2024)

Maya Forstater to Ed Humpherson: ONS research into 2021 England and Wales Census data on Gender Identity

Dear Mr Humpherson

The ONS has released the final report on its investigation of the quality of the 2021 census’s gender-identity data together with a statement saying it has “confidence in the gender identity estimates at national level”. The report offers the following conclusions in support of this.

  • On collecting and processing the data: “This strand of our research … provided no evidence that the design of the question or the statistical processing of the collected data had an adverse effect on the quality of the published statistics”.
  • On respondent error and misunderstanding: “There are patterns in the data that are consistent with some respondents not interpreting the question as we had intended… However, for the reasons outlined in this report, we cannot say whether the census estimates are likely to be an overestimate or an underestimate of the true value, given other sources of uncertainty, not least the potential impact of question non-response. Therefore, the overall impact on the data of any misinterpretation of the question cannot be determined.”

We find that the ONS’s investigation has been inadequate and its conclusions are not supported for five reasons.

  1. The investigation does not address the core issue concerning consistency or not between an individual’s answers to the sex question and gender-identity question.
  2. Implausible explanations are accepted ahead of the more likely interpretation that respondents who answered ‘No’ to the question about whether their gender identity and sex match did so through misunderstanding.
  3. Given the small numbers of transgender people and the likely level of misunderstanding, the statement that the ONS has “confidence in the gender identity estimates at national level” is not supported.
  4. The classification of people by the terms “trans man” and “trans woman” by the ONS is not in line with the Census (England and Wales) Order 2020.
  5. The design of the question relies on the whole population making a declaration based on the idea of gender identity in order to estimate a minority that identify as transgender.

Furthermore, the focus in the ONS statement on the numbers being plausible at the national level implies (but does not make explicit) that it is not confident they are plausible at the subnational level.

Attached to this letter are further details on each of these concerns.

Our conclusion is that data on gender identity (including the sex of the people identified as transgender) is not fit for purpose. It was driven by the adoption of concepts and questions promoted by lobby groups that seek to replace sex with gender identity.

The lack of clarity and certainty about the meaning and interpretation of the sex question and the gender identity question also has knock-on effects for interpreting how the sexual orientation question, the sex and the gender identity question interact.

This investigation by the ONS is inadequate and undermines confidence in national statistics. It lays the groundwork for further erosion of clarity on sex, and the wider adoption of a gender identity question, and use of associated data, that has been demonstrated to be unreliable.

The ONS has proved itself unwilling to accept clear indications that the gender-identity question produced unreliable answers, and has adopted a “self-identified” approach to the definition of sex even after the High Court judgment in 2021. We therefore have no confidence that the ONS will adequately address the need for clarity and accuracy about sex and transgender identity in its development of the harmonised questions on sex and gender identity.

We call on the Office for Statistics Regulation to take regulatory action in order to secure public confidence in national statistics, and to prevent the faulty question being replicated.

  • The question should be officially retired, with an apology and an explanation to discourage others from using the same wording.
  • A warning should be put on the data and it should not be designated as national statistics. The OSR must determine whether the national headline figure and the sub-national figures should have national statistics designation.
  • The ONS definition of “sex” for the 2021 census should be corrected to reflect the guidance given to respondents following the FPFW challenge, and the question of whether actual sex should be routinely and clearly collected for population demographics should be reviewed.

The ONS should also:

  • publish the free text answers broken down by sex
  • publish details on the number of people in the telephone survey who did not confirm their answers to the gender-identity question
  • analyse the pattern of households which they classify as having more than one trans member (which may indicate a misunderstanding of the question)
  • investigate whether it is possible to cross-check sex and self-identified gender for this section of the population with other data sources such as administrative data from the NHS and the birth and gender reassignment registers.

We would like to know whether the response from the ONS was seen or agreed by the OSR before the ONS sent it and whether it has been discussed by the UKSA board. Is the ONS report the view of the National Statistician? Has it been seen by the Methodological Assurance Review Panel or any other methodological group?

We are publishing this letter and we request that it and any response are also published on the UKSA website and linked to in a way that makes it findable to users of the gender-identity data.

Yours sincerely

Maya Forstater
Executive Director

Helen Joyce
Director of Advocacy

Michael Biggs
Advisory group member

 

Related Link:

Ed Humpherson to Maya Forstater: ONS research into 2021 England and Wales Census data on Gender Identity