Penny Babb, head of policy and standards in the Office for Statistics Regulation, busts a few myths about experimental statistics.
I reckon that experimental statistics are probably one of the least understood aspects of official statistics among producers. But, you may say, they’ve been around for nearly 20 years! So perhaps familiarity breeds contempt? There may be something in that. But in this blog, I’d like to correct a few myths about experimental statistics and leave you with the sense that these are cutting edge statistics – they are at the heart of innovation in official statistics.
“ES is a mark of poor quality” – WRONG!
‘Experimental statistics’ means that the statistics are going through development and evaluation.
Going through a process of development will certainly mean there is greater potential for uncertainty in the estimates.
But doing it correctly (that is, explaining clearly the nature of any important limitations when publishing the experimental statistics) should mean that they are useful to users. If they aren’t, perhaps you shouldn’t be publishing them.
“You can just tell users about what you’ve done after you’ve done it” – oh no you can’t!
The best way to test new methods or sources of data is to do so openly and in collaboration with users and other producers.
You build confidence in the statistics by being open and clear about the developments. That involves setting out the nature of both the development and evaluation.
Share your plans. Be clear about when users can be involved. Learn from their expertise and experiences. Understand what questions they want the statistics to answer, and how they use the data and statistics to answer those questions.
Feed their ideas and experiences back into your development. Your statistics will be the better for it and you will have earned credibility with those who know your statistics best – your users.
“The ES label can be used for as long as you like” – an absolute no-no!
The label of experimental statistics should only be used while the development and evaluation are happening.
If the same method and data are being used routinely to produce supposedly experimental statistics, but there is no actual development underway and the label is simply being used to signal that the method or the data are a bit rough, then stop using the experimental statistics label.
Instead, you should be labelling the statistics as official statistics and be making clear the nature of the strengths and limitations – if there is a quality issue that users need to know about, then tell them. But don’t do it by labelling your output as experimental statistics.
“ES is only for official statistics and isn’t relevant to National Statistics” – oh yes, it is!
It is imperative that National Statistics continue to reflect the aspect of society that they represent – without these kinds of developments, there is a risk that statistics do not remain relevant. We had a good reminder of the dangers of that in the Bean review of economic statistics.
And here’s a great current example. MHCLG’s team producing land use change statistics is undertaking a development that draws on cutting-edge technology – earth observations from its partners at Ordnance Survey. The team has developed a good understanding of the insights that users need around land use and the change in use. So, when the statisticians identified a new source of data, they jumped at the chance to improve their National Statistics. Check out the work of Sophie Ferguson and her team on the GOV.UK website.
“Publishing an experimental official statistic alongside our consultation gave us an opportunity to provide proof of concept. The data had not been published before and as an experimental official statistic we had a clean slate to demonstrate how, using the latest technology, we might enable the user to control how they interact with the data, and get better insights more directly.”
Sophie Ferguson, lead statistician, Land Use Change Statistics, MHCLG
Please write to me at regulation@statistics.gov.uk if you have any comments, queries or good examples about experimental statistics – or if you have other myths that you would like busted!
It will be great to hear from you.