QAAD Questions – what do I need to ask?
What, how, why...
Use this guide as a prompt to help you find out more about your admin data sources and any associated quality issues. Be proportionate in the scale of your investigation, based on the importance of the statistics.
But remember – this is not a checklist.
Data quality concerns
How are the data collected? What is the data collection and supply process?
Are the data collectors and the data supplier bodies different organisations? Is there more than one supplier?
Do suppliers use the same collection and coding systems?
Is there a formal agreement in place? Do we pay for the data? If we receive the data under contract, does the contract specify data quality requirements, so that suppliers maintain quality during and between contracts?
Are data suppliers required to report quality indicators to us? Do we receive the reports, and what do they show?
Have there been policy changes that have changed the data collection requirements?
Historically, how many errors, late deliveries, incomplete submissions or resubmissions of data have we had?
Do particular suppliers have poor track records for completeness or timeliness of supply? (One way to summarise supplier track records is sketched after these questions.)
What are the unmitigated risks to the quality of the data supplied? Can we put any more processes in place to mitigate the risks?
Have the data collection/processing systems changed, or are changes planned?
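Where the supply history is logged, track records like these can be summarised mechanically. The sketch below is illustrative only: it assumes a hypothetical delivery log with invented field names, and any real summary would use whatever your supply management system actually records.

```python
from collections import defaultdict

# Hypothetical supply log: one record per delivery received.
# Field names and values are invented for illustration.
deliveries = [
    {"supplier": "Trust A", "on_time": True,  "complete": True,  "resubmitted": False},
    {"supplier": "Trust A", "on_time": False, "complete": True,  "resubmitted": True},
    {"supplier": "Trust B", "on_time": True,  "complete": False, "resubmitted": True},
    {"supplier": "Trust B", "on_time": True,  "complete": True,  "resubmitted": False},
]

# Count late, incomplete and resubmitted deliveries per supplier.
counts = defaultdict(lambda: {"n": 0, "late": 0, "incomplete": 0, "resubmitted": 0})
for d in deliveries:
    c = counts[d["supplier"]]
    c["n"] += 1
    c["late"] += not d["on_time"]
    c["incomplete"] += not d["complete"]
    c["resubmitted"] += d["resubmitted"]

# Report each supplier's track record as simple rates.
for supplier, c in sorted(counts.items()):
    print(f"{supplier}: {c['n']} deliveries, "
          f"{c['late'] / c['n']:.0%} late, "
          f"{c['incomplete'] / c['n']:.0%} incomplete, "
          f"{c['resubmitted'] / c['n']:.0%} resubmitted")
```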
Public interest profile
What use is made of the statistics?
Who uses the statistics?
How broad is the user base?
What decisions do the statistics impact?
If these statistics were no longer produced what would the impact be?
Are the statistics used to hold the government or public bodies (e.g. NHS) to account?
What is the reputational risk attached to these statistics? Does this change over time?
Is there an alternative source?
Is there political interest connected to these statistics (e.g. manifesto pledges, Government commitments, etc.)?
Are there any laws or regulations mandating the supply of data?
Telling users about quality issues
Do our users know why and how the data are collected?
What is the best way to communicate the data supply process to our users?
Have we explained how we mitigate risks or our plans to mitigate risks in future?
Have we set out clearly the steps taken to ensure data quality throughout the collection and supply processes?
Have we been clear enough about any past changes to the data collection process and implications for the limitations of the statistics?
Have we explained the risks to data quality and the impact on the statistics to our users?
What do users need to know about the quality assurance arrangements to inform their use?
Do people misuse my statistics? What can I do to minimise this?
Operational context and admin data collection
What data are collected?
How are they collected?
Why are the data collected in this way? Is this the best way?
Is the coverage sufficient to minimise bias and error? (A simple numerical coverage check is sketched at the end of this section.)
What is the implication of the collection process for data quality?
Am I aware of any third-party data quality assurance or audit functions that operate during the data collection process?
Does the presentation of performance measures in statistical reports provide perverse incentives?
How confident do I feel about the comprehensiveness of my knowledge about the operational context and data collection process?
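Where an external reference total exists, coverage can also be tested numerically. The sketch below is a minimal illustration: the admin counts, reference populations and the 90% threshold are all invented, and the right comparison source and tolerance will depend on the statistics.

```python
# Hypothetical counts: admin records received vs. an external
# reference population, by area. All figures are invented.
admin_counts = {"North": 9_400, "South": 11_800, "East": 6_100}
reference = {"North": 10_000, "South": 12_000, "East": 8_000}

THRESHOLD = 0.90  # illustrative minimum acceptable coverage rate

for area, ref_total in reference.items():
    rate = admin_counts.get(area, 0) / ref_total
    flag = "" if rate >= THRESHOLD else "  <-- investigate possible bias"
    print(f"{area}: coverage {rate:.1%}{flag}")
```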
Communication with data supply partners
Am I happy with my engagement and working relationship with my data suppliers?
Do I understand the processes undertaken by my suppliers?
Do I get enough information from them to help me get the data I need?
Am I happy with the consistency of the data and with the data definitions used?
What happens if things go wrong?
How will I know if my data suppliers change their systems or processes? Are there any implications of such changes for the statistics? (A simple automated layout check is sketched after these questions.)
Do my suppliers have a good understanding of the way I am using their data? Am I up to speed with any relevant data sharing or data protection legislation, and are any necessary agreements in place? Are my suppliers?
How confident do I feel about the effectiveness of the way I communicate with my data suppliers?
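Supplier engagement is the main safeguard against unannounced changes, but an automated check on each delivery can act as a backstop. A minimal sketch, assuming deliveries arrive as CSV files and that the agreed column layout is known; the column names here are invented.

```python
import csv

# Illustrative expected layout; in practice this would come from
# the data specification agreed with the supplier.
EXPECTED_COLUMNS = {"record_id", "event_date", "area_code", "outcome"}

def check_delivery(path: str) -> None:
    """Warn if a delivery's columns differ from the agreed layout."""
    with open(path, newline="") as f:
        received = set(next(csv.reader(f)))  # header row only
    missing = EXPECTED_COLUMNS - received
    unexpected = received - EXPECTED_COLUMNS
    if missing or unexpected:
        print(f"{path}: possible system change; "
              f"missing={sorted(missing)}, unexpected={sorted(unexpected)}")
    else:
        print(f"{path}: layout matches the agreed specification")
```

A check like this will not catch every change (for example, a redefinition behind an unchanged column name), so it complements rather than replaces supplier notification arrangements.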
QA principles, standards and checks applied by data supply partners
Do I understand what quality assurance arrangements my suppliers have in place?
How do I know that the data suppliers are taking the collection of their data seriously and ensuring its quality?
How do I know that the data suppliers are routinely applying any quality assurance set out in the service level agreement?
If I have concerns, can I influence the data suppliers to strengthen their QA?
Do the data suppliers sense-check their data and what can I learn from that?
What are the tolerance levels for errors and outliers? (One way to make a tolerance explicit is sketched after these questions.)
If the data look strange, but the suppliers say the data were QA’d, do I feel confident in these results?
What is my appetite for risk on this? How much would an error impact my results? Do I think the assurance arrangements are sufficient for my need and use?
What are the implications for the quality of the statistics stemming from my findings about the data suppliers’ QA arrangements?
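Tolerances are easier to apply consistently when they are written down and checked mechanically. The sketch below is one illustrative approach, not a prescribed rule: it flags values more than a chosen number of median absolute deviations from the median, with the threshold and figures invented.

```python
from statistics import median

def flag_outliers(values, tolerance=3.0):
    """Flag values more than `tolerance` median absolute deviations
    from the median (an illustrative robust rule of thumb)."""
    med = median(values)
    mad = median(abs(v - med) for v in values) or 1.0  # guard against zero
    return [v for v in values if abs(v - med) / mad > tolerance]

# Invented monthly counts from one supplier.
counts = [1020, 998, 1015, 1007, 2400, 1003]
print(flag_outliers(counts))  # -> [2400]
```

A robust measure such as the median absolute deviation is used here because a single extreme value would inflate a standard-deviation rule and mask the very outlier being looked for.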
Producers' QA investigations and documentation
Have I sense-checked the data against other sources, and have I investigated (and can I explain) any differences found? (A minimal comparison sketch is given at the end of this section.)
Am I checking the right things at the right point in the process?
Do all the QA steps still add value?
Are there things I can do outside the release round to reduce the QA burden and pre-empt any issues?
If I were away for a month, could I be confident that someone in my team could pick up the QA processes and sign them off?
Do I know how users use the data and so what they need to know about QA?
If the BBC asked me why my data have changed, could I explain it?
What do our international counterparts do to QA similar data?
What are the implications for the quality of the statistics stemming from my QA findings?
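Both sense checks against other sources and explanations of period-on-period movement can be supported by a simple comparison. A minimal, illustrative sketch, with invented totals and an invented 5% tolerance:

```python
# Invented headline totals: this release vs. the previous release
# and an alternative source.
current, previous, alternative = 15_400, 14_100, 15_050

def pct_change(new, old):
    return (new - old) / old

print(f"Change on previous release: {pct_change(current, previous):+.1%}")
print(f"Difference from alternative source: "
      f"{pct_change(current, alternative):+.1%}")

# Illustrative rule: movements beyond 5% need a documented
# explanation before publication.
if abs(pct_change(current, previous)) > 0.05:
    print("Flag: prepare an explanation for the movement")
```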