At the Office for Statistics Regulation it is our job to make sure that statistics serve the public good. Part of this role is combating the misleading use of statistics.

We intervene where we see statistics being used in a misleading way, and remind those responsible that, to uphold public trust, data should be used correctly and with full transparency.

But it can be difficult to know whether the use of misleading or incorrect figures is deliberate. And it can be even more difficult to know what action is appropriate for us to take.

How can we prove that misleadingness has happened? And if we can prove it, how do we measure its severity and decide how to intervene? Last year we decided to explore these questions more thoroughly. We worked with a philosopher, Jenny Saul from the University of Sheffield, who has written about misleadingness, and tried out various approaches and ways of thinking.

The thinkpiece published today is the fruit of those discussions. It is deliberately exploratory: we present a series of alternative approaches, and we don’t pretend it is the last word on the subject. It briefly explores three approaches to tackling misleadingness and discusses the distinct benefits of each.

We would very much like to hear people’s views on these approaches.

And we are very grateful to Jenny Saul for the time she spent with us discussing these questions, and for the foreword she has written.