2 Levels of Public Trust
As Citrin and Stoker (2018, p.56) reflect, much is already known about how trust levels differ across time, place, individual and context, and much work has been dedicated to explaining these differences. This review compiles these explanations.
This section considers specific actors and objects which are relevant to official statistics. Taking this holistic approach is advantageous. Firstly, it compensates for the sparsity of research dedicated specifically to trust in official statistics. Secondly, it provides a broader picture of societal levels of trust, as well as insights into how this varies across the relevant domains.
In terms of structure, this section is organised to enable the actor and the object to be considered in turn. This applies the analytical distinction endorsed by the Fellegi model (2010), which shows that the qualities of trust in statistical institutions (protecting confidentiality, integrity, openness, impartiality and effective stakeholder management) and trust in statistical products (accuracy, timeliness, reliability, credibility, objectivity, relevance and coherence) are distinct, and that both are needed to build trust in official statistics.
Alongside this, this section also adheres to Achterberg et al.’s (2017) finding that it is important to distinguish between scientific evidence (the object) and the scientist (the agent or actor). This is because the public may trust the evidence the object generates, yet they may not necessarily trust the individual actor carrying out the study or communicating its findings to them. This reflects a shift towards anti-elite/anti-expert sentiments that have become a feature of a globalised, poly-crisis society (Mede & Schäfer, 2020).
With this in mind, it is important to avoid conflating low levels of trust in actors with low levels of trust in the evidence, or the information and knowledge it generates.
2.1 Trust in Relevant Actors
The UK public’s trust in government ministers reached an all-time low in 2023 (Ipsos, 2023) and has consistently remained lower than trust in experts (Department for Science, Innovation and Technology, 2024). Recent studies show that trust in the media is low (Organisation for Economic Co-operation and Development, OECD, 2024b), and journalists remain one of the least trusted professions (Ipsos, 2024).
This depiction of waning trust is part of a broader pattern referred to as a “crisis of expertise” (Eyal, 2019). This pessimistic picture has shaped the focus of the trust literature, with some scholars (such as Wiesehomeier & Ruth-Lovell, 2024) examining the paradox of trust in “the people” (low moralistic trust in the subject at an individual level and high ascribed trust in the abstract object) alongside others who, highlighting the performance–trust nexus, have focused on the parallels between low trust and low-quality governments (Keefer, 2021).
However, the most recent round of surveys suggests that there may be a slight increase in reported levels of trust for ministers, politicians, civil servants, journalists and scientists (Ipsos, 2024), signifying a possible improvement in the trust climate in the last 12 months. Of course, the data needed to establish whether this increase marks the start of an upward trend, or is merely a momentary deviation from the downward trajectory witnessed over the last decade or so, are not yet available. Keeping abreast of future trends relating to the actors highlighted in this review will provide helpful answers to this question.
With this broader picture in mind, this review takes a segmented approach to delivering evidence on public trust levels. Taking the relevant actors in turn, it outlines how they are related to official statistics, specifying either the production or communication function they provide. Thereafter, evidence, data and studies, along with key patterns and trends, are highlighted.
The value of this structure is that it recognises that standards of trust, or more accurately how the public expect trustworthiness to be exemplified, differ depending on the person in question. To cite Seyd, Jennings and Hamm (2022), for a politician, ‘trust is based on their level of care and concern for ordinary people, and for their honesty and fidelity to promises.’ Where scientists are concerned, expertise and experience are treated as suitable metrics, and trust is awarded to those with the ‘technical knowledge and capabilities’. This highlights the importance of not conflating standards, or measures, of trust from one profession to another. In accordance with this, the government, civil servants and other public bodies, scientists and experts, and finally journalists and the media are each explored individually in the following subsections.
2.1.1 Trust in Government
Understanding trust in the government is important because government ministers often cite official statistics in their communications with the public, especially when introducing new policies or evaluating the progress of existing ones. This role of government ministers as communicators of official statistics makes understanding trust in the government they are part of crucial.
Previous OECD studies (2024a) show that levels of trust in the UK Government (27%) are below the average reported levels of trust (39%) within the OECD. Alongside this, Ipsos findings indicate consistently low levels of trust in politicians, with levels of self-reported trust never rising above 23% for politicians and 25% for government ministers since the Veracity Index began in 1983 (Ipsos, 2024).
Studies also point to low levels of popularity. For instance, the National Centre for Social Research (NatCen) points to high levels of vocal critique, with the British Social Attitudes survey reporting that ‘the public are as critical now of how Britain is governed as they have ever been’ (NatCen, 2024).
This situation of low levels of government popularity, and high levels of critique directed towards the government, reflects what Sztompka (1999) has termed a ‘culture of distrust’ (as cited in van de Walle & Bouckaert, 2003). Explaining how this ‘culture of distrust’ is established, van de Walle and Bouckaert propose that it is reminiscent of the spiral of silence hypothesis (initially proposed by Noelle-Neumann in 1974 and reviewed in a meta-analysis by Glynn et al., 1997). This, they explain, is because when considering whether to vocalise trust, there is a ‘type of social pressure to comply with this [negative] attitude’, driven by a fear of isolation (should divergent opinions be exposed) (van de Walle & Bouckaert, 2003, p.905). As a result, they argue that ‘[a]s long as the people think most people have a negative perception of government, they will express a negative perception themselves, even if this perception does not correspond to reality’ (van de Walle & Bouckaert, 2003, p.905).
Compounding this, van de Walle and Bouckaert claim that negative attitudes ‘seem to support themselves [and] examples of good performance are just not noticed anymore’ (2003, p.906). This challenges the performance–trust hypothesis, which proposes that performance (meeting policy goals, maintaining promises, etc.) is sufficient to build trust. On the contrary, it suggests that (expressions of) distrusting attitudes are influenced by public perceptions, and that distrusting attitudes are socially reinforced.
This points to the importance of securing and maintaining a positive public image in order to avoid being engulfed within the ‘culture of distrust’ and becoming the object which ‘the people think most people have a negative perception of’ (van de Walle & Bouckaert, 2003, p.905). One way to achieve this in the sphere of official statistics may be by encouraging public figures to express trust in said statistics, specifically in instances where this trust is earned.
Honesty is an important aspect of trust (as evidenced in survey responses). This theme is covered by Ipsos, which, in 2024, reported a slight increase of 5pp. in trust in government ministers to tell the truth, and a 2pp. increase for politicians. However, it remains the case that both received a negative net trust calculation (-65% for government ministers and -74% for politicians), and they rank penultimate and last in terms of trust (15% and 11%, respectively). In addition, across all OECD countries, only 41% of respondents reported that they think the government uses the best evidence in decision making; this drops to 37% for the UK (Figure 5.13, UK referred to as GBR, OECD, 2024b).
Studies have evaluated this phenomenon of low trust and perceptions of dishonesty, and explanations have been proposed. A survey conducted by the OECD (2024a) explored patterns of trust and distrust in the government. It highlighted demographic variations (with women and younger people associated with lower levels of trust) alongside socioeconomic explanations (reflecting the finding that those experiencing economic hardship reported lower levels of trust).
The same survey also revealed that the UK is one of a very small number of countries where higher education is not associated with higher levels of reported political trust (those with lower levels of education reported 30% trust, compared with 23% among those with higher levels of education) (OECD, 2024b). This finding suggests that, whilst understanding patterns and variations may be useful to help tailor communication efforts, education is not a universal remedy for low levels of trust (contradicting the knowledge deficit model).
Concentrating on the finding that trust is lower among the younger demographic, models of political socialisation can be helpfully applied to provide a picture of whether it is context (environmental stimuli) or life stage (nature) which contributes to lower levels of trust. Strengthened by the observation that the previously observed pattern of trust (where older people reported lower levels) has changed, this finding indicates that the context of socialisation (the generational model, as outlined by Blais et al., 2012) has a greater impact on one’s trust levels than their life stage. In other words, it suggests that trust levels are low among the younger generation because of the political conditions in which they have been socialised: contemporary society appears to be negatively associated with trust.
Interestingly, it seems that this bleak picture of low levels of societal trust has not necessarily translated fully into politicians’ own calculations of others’ levels of trust in them. Weinberg (2023) has investigated this discrepancy using comparable questions focused on trust, and perceived trust. In pointing to a ‘trust gap’, he reports that, on average, perceptions of trust were markedly higher than actual trust, and perceptions of distrust were considerably lower.
This reiterates the importance of directly monitoring public levels of trust via public engagement and/or social research. If this route is neglected, we are left to depend upon politicians’ own assessments of how trusted they are, which, as Weinberg shows, are unlikely to be accurate.
Monitoring public levels of trust closely will allow government ministers to receive responsive feedback specifically on how their actions and behaviours are interpreted by the public (trustworthy, or suspicious/harmful to trust). ONS surveys (ONS, 2024) indicate that integrity is positively associated with trust in the government, and previous OECD studies (2024a) signal that reliability, openness and fairness are desirable traits. The same research also pointed to specific drivers of trust, finding that following the same rules (63%), competence (50%) and engaging citizens (49%) were considered important by members of the public (ONS, 2024).
However, the granular insight that might explain why government ministers’ efforts to exemplify trustworthiness (via specific behaviours) are being lost in translation is missing.
Establishing a feedback mechanism of this nature could provide an invaluable tool for governmental officials to learn about how they are perceived, and to provide actionable strategies to avoid falling into the ‘trust gap’. In the context of official statistics, this feedback tool could involve polling and gathering opinions immediately following ministerial communication of an official statistic. Focus groups or other research methods could be implemented to explore communicative interaction in more depth.
2.1.2 Trust in Government Departments, Civil Servants and Public Bodies
Trust in the civil service and wider government bodies is important, as official statistics are produced by civil servants based in government departments. For producers of official statistics, understanding public levels of trust in the wider governmental apparatus is therefore critical.
A study carried out by the OECD (2024a) shows that civil servants are the most trusted part of government (45% compared to 27% for the UK Government and 12% for political parties). This finding was reiterated by ONS (2024), who reported higher levels of public trust in non-political arms of government – such as the Civil Service – compared to political parties, parliament and devolved governments.
Echoing these relatively optimistic reported trust levels, the Veracity Index (Ipsos, 2024) reports a net positive rating (+21%) showing that more people trust civil servants to tell the truth than expect them to lie. That being said, although civil servants’ overall trust rating (56%) has increased by 31pp. since the index’s inception in 1983, this is a considerable fall from the peak of 65% in 2019 (Ipsos, 2024). This recent decline shows that while trust has remained positive, civil servants have not been isolated from the wider trends of falling trust in recent years.
Amidst this broader trend of waning trust, attitudes towards governmental bodies differ, and trust may not be universally applied to different actors or sectors. Understanding this point is important as it relates to how trust can be shared and borrowed across networks (such as the government apparatus) and suggests that bandwagoning is not a foolproof option.
To illustrate this point, data from the Public Attitudes to Data and AI Tracker Survey (2024) are used. This survey is specifically designed to explore public attitudes towards data uses, including data sharing, and the risks and opportunities associated with artificial intelligence (AI). The survey is, however, included in this review as its core finding, that trust is driven by the organisation involved, is particularly relevant to illustrate differing levels of trust across the arms of government. This conclusion was reached through a conjoint-choice-based experiment, where participants were presented with two options and asked to indicate their preference. Across both identifiable and anonymised data, the results of an attribute analysis showed that the organisations involved in the data transfer mattered more to individuals than either what the data were going to be used for or the governance structures that were in place.
This finding emphasises the importance of a positive organisational or departmental reputation. This suggests that even if the use case is valuable for society, or of personal importance to the individual, if the public do not trust the organisation, then efforts to highlight the purposes of data collection and any reassurances the organisation provides may be insufficient.
The literature dedicated to the question of trust in civil servants is relatively marginal compared to that focused on trust in government and/or political actors. That being said, studies suggest that trust is higher when government performance is positive. These studies are based on the notion that ‘bad performance of government actors and agencies would create negative attitudes towards government in general’ (van de Walle & Bouckaert, 2003, p.893, as cited in Morelock, 2021, p.319). This is in line with the performance–trust hypothesis (Yang & Holzer, 2006) and proposes that departmental trust is evaluated against performance criteria.
Challenging the importance of performance, van de Walle and Bouckaert (2003) propose that the performance–trust hypothesis may be more compelling when negative attitudes are presented as the independent variable (rather than government performance). To elaborate, ‘the existence of a generalised negative attitude’ (such as the ‘culture of distrust’ discussed in the previous section, Trust in Government) creates a situation whereby the actions of government will be evaluated in a negative way ‘just because they are government actions’ (van de Walle & Bouckaert, 2003, p.902). This flips the mechanism on its head and points to the possibility of reverse causality. In line with this account, trusting attitudes are situated as the starting point (independent variable), and evaluations of government performance (including government ministers, officials and civil servants) are portrayed as the outcome (dependent variable).
Also taking an alternative outlook to studies which have prescribed improved performance as a route to establish trust, another branch of the literature focuses on ‘the processes used by government, rather than outright results’ (Morelock, 2021, p.316). To quote Van Ryzin (2011), ‘public perceptions of the trustworthiness of civil servants depend not just on the extent to which government succeeds at delivering outcomes to citizens – but on getting the process right by treating people fairly, avoiding favouritism and containing corruption’ (p.755, as cited in Morelock, 2021, p.319). Houston et al. (2016) concur: ‘citizens do not only expect competence, administration must also be characterised by ethical behaviour’ (p.1211, as cited in Morelock, 2021, p.319–320). These contributions suggest that it is not enough to deliver positive or quality outcomes, and that the public expect more than performance when asking the question “In bureaucrats we trust?” In other words, performance and quality are necessary, but they are not sufficient to build trust in civil servants.
Alongside positive performance and a reliable track record, civil servants must ensure they are acting impartially, behaving with integrity and holding bad behaviour (not just bad performance) to account.
Further evidence to support the recommendation of good governance, rather than simply effective governance, is provided by Morelock (2021). His study involved performing a multilevel logistic regression, using data from 23 OECD countries. The UK was not included in the sample due to insufficient data; however, as the UK fits the selection criteria of ‘advanced industrial economies’ (p.320) and the cases included do vary in their structural and institutional characteristics, the results may still be helpful in the UK context. To summarise, Morelock finds that trust in civil servants was higher among individuals with higher political efficacy (the knowledge to participate in politics and the belief that their participation has an impact) and individuals with perceptions of lower government corruption.
This suggests that emphasising accountability structures, behaving in a transparent manner and communicating these responsibilities to the public can help build trust in the departmental body in question.
One final theme raised in the literature relating to public trust in government bodies is public engagement. Public engagement is a much richer process than gathering feedback; it implies a two-way process where parties exchange knowledge, experience and views in an effort to come to a shared understanding. Petts (2008) critiques the transactional account of how public engagement builds trust, making the case that to engage in public engagement under the false pretence that simply gathering views ‘will result in enhanced trust’ [emphasis in original] is an error (p.822).
This premise points to an important takeaway: the conditions of public engagement matter. Performative, superficial and empty efforts to engage with public and stakeholder views are unlikely to increase trust simply by virtue of the process having taken place.
As Petts (2008, p.832) explains, the International Framework for Risk Governance (IRGC, 2006) provides some helpful guidance based around the criteria of representation, collaboration and decision impact. Specifically, it advocates for wide representation, referring to both invited participants and contributors within government. Based on her own extensive experience of the deliberative process, Petts recommends that an array of experts who bring differing opinions be invited and that time be taken to translate complex technical jargon into accessible language. Next, developing a shared construction of the problem is recommended in order to counteract the concern that engagement is superficial. However, this is a balancing act within the constraints of what is legislatively possible, as well as a desire not to leave the process completely directionless, which may in turn decrease trust as it is seen to denote a lack of competence.
The final recommendation focuses on the importance of the decision being reflective of the discussions held. This is not to command that the outcome translates directly into practice without reflecting other considerations; rather, it is to warn that if outcomes are not witnessed, and no explanation is given for why this is the case, the public may start to question the value in participating. In this case, any interpersonal trust built during the deliberative process may remain context-specific and not translate to future activities, nor the wider institution, organisation or department.
These principles (representation, collaboration and decision impact) are included to provide some guidance on meaningful public engagement, and to demonstrate how, if conducted in a superficial manner, it may have a net negative effect on trust.
2.1.3 Trust in Experts and Scientists
Scientists mirror the role of official statistics producers. Both produce evidence which is used to inform individual, and wider societal, decisions. However, scientists are not direct producers of official statistics, and they most often operate outside of government structures. Despite this, as scientists are expert communicators of evidence, examining how they are perceived by the public may reveal interesting findings that can be applied to the official statistical sphere. Furthermore, within the literature, there has been a concern that the public perception of scientists influences their willingness to trust scientific evidence (Besley, 2015). Consequently, it is important to understand how scientists are perceived.
To clarify, this section focuses on trust levels in scientists (the agent) as opposed to trust in scientific evidence (which is reviewed in section 2.2.2, Trust in evidence). Echoing the overall structure of this review, the studies presented here subscribe to the notion that people may trust someone, not something.
According to public opinion research carried out in 2025 by the Campaign for Science and Engineering (CaSE), “scientist” was reported to be the profession that people indicated they trusted the most, with 7 in 10 people reporting that they trust scientists, and 22% saying that they trusted them completely. It is possible that these figures are somewhat inflated by social desirability effects. Nonetheless, comparably high figures are reported elsewhere. For instance, according to the 2024 Ipsos Veracity Index, 79% of respondents indicated trust in scientists.
High levels of trust in scientists were also reported in the 2019 Public Attitudes to Science (PAS) survey, with their place of work highlighted as a factor in differing trust levels (Department for Business, Energy and Industrial Strategy, 2020). Specifically, scientists working in the university sector were the most trusted by the public (around 90%), whereas those in the private sector were trusted the least (57%). Interestingly, government-employed scientists were trusted by around 75% of people, which, though displaying a lower level of trust than that observed for university-sector scientists, is higher than that reported for civil servants generally (45%). Taken together, this suggests that sector and profession both contribute to levels of public trust.
To capitalise on the higher levels of trust in government-employed scientists compared to civil servants generally, statistics producers may benefit from emphasising their membership of the Government Statistical Service in publications, rather than departmental membership.
Moreover, it is interesting that scientists were seen as the most trusted profession (CaSE, 2025), yet when presented with pairs of opposing attributes, more people indicated that they viewed scientists as secretive (44%) than open (41%) (Department for Business, Energy and Industrial Strategy, 2020). This pairs activity did not relate specifically to trust. Nonetheless, it is interesting that 74% of people indicated that they trust scientists (Ipsos, 2023), yet (as reported in PAS, 2019), when given the binary option, more respondents selected the secretive attribute. This may imply that openness is not the main component in determining trust.
Further research suggests that openness is not necessarily conducive to improving trust levels. As Younger-Khan et al. (2024) conclude, self-disclosure – one method of signifying openness – is found to signify “warmth” and improve the perception of one’s benevolence and integrity. However, it is viewed as a detriment to perceptions of competence. Altenmüller et al. (2023, as cited in Younger-Khan et al., 2024, p.3) examine the trade-off and show that a high perception of warmth (and openness) does not lead to an overall improvement in levels of trust. Applying the ABI (ability, benevolence, integrity) model to this question, the finding that perceptions of warmth (which may improve perceptions of benevolence and integrity) do not necessarily increase levels of trust implies that benevolence and integrity may not be prevalent attributes when it comes to generating trust in scientific experts. Instead, one’s ability and their perceived level of competence appears to be the principal attribute.
Relating this discussion on desirable attributes to the bigger picture, it is worth noting that the prioritisation of competence diverges from Devine et al.’s (2024) finding that, for political trust, benevolence is considered to be the most essential attribute. This reiterates the importance of acknowledging different professional standards when seeking to increase levels of trust.
Hence, it is important to establish what characteristics and standards people consider to be important for official statistics producers. This would help in providing clearer guidance on what standards should be aspired to, as well as insights as to how they are received by the public. Moreover, this clearer picture may also be useful in the sense that it may safeguard against potentially mistaken efforts, which may prove counterproductive should they emphasise attributes the public do not consider to be essential, or even appropriate, for producers of official statistics.
The final observation related to scientific experts reflects the importance of value alignment. These studies suggest that trust is not necessarily determined by the specific values that a scientist (or an expert, more broadly) displays. Rather, what matters in terms of exemplifying trustworthiness is that the values of the expert and public align. In other words, do the public see themselves reflected in the expert? As Siegrist, Cvetkovich and Roth (2000) show, in cases where values align, general trust increases, whereas in cases of disparity, trust falls.
This finding that value alignment may help foster relations of trust can be helpfully applied to the official statistics sphere with the recommendation that producers align themselves with the public’s expectations around statistical production.
OSR’s public dialogue project (2022) provides evidence showing what aspects of statistical production, and dissemination, the public value. The following aspects were identified: public involvement; reflecting real-world needs; clear communication; minimising harm; and best-practice safeguarding.
Continued public dialogue in order to remain at the forefront of any value shifts, or emergent concerns relating to the use of statistics, will be necessary to ensure continued alignment.
2.1.4 Trust in Journalists and the Media
The role of the media and journalists is crucial in the communication of, and the public’s engagement with, official statistics. Although they are not involved in the production of official statistics, journalists and the media contribute to salience and coverage, thereby playing a significant role in shaping the public’s interpretation of official statistics. This highlights the role of the news as an intermediary in terms of shaping how official statistics are communicated to the public. Alongside this, it also alludes to the possibility that media coverage has broader ramifications.
The 2023 OECD Survey on Drivers of Trust in Public Institutions (OECD 2024a) found that, in the UK, trust in the media was reported as 19%. This is below that afforded to the Civil Service (45%), the police (56%) and the courts and judicial system (62%). This low ranking is not a particularly surprising finding as journalists are consistently ranked among the five least trusted professions, with only 27% of people trusting them to tell the truth in 2023.
This low rating of perceived honesty (27%) is a 6pp. increase from 2022. However, journalists still received a negative net rating of -40%, indicating that public perception is heavily skewed towards the expectation that journalists will not tell the truth (Ipsos, 2024). This is reiterated in the findings of the Edelman Trust Barometer (2025), which shows that, globally, 70% of people believe that journalists ‘purposely mislead people by saying things they know are false or gross exaggerations’. This is an increase from 2021 (59%) and is part of a broader pattern, with respondents reporting being more concerned about being intentionally misled by business leaders (68%), and to a greater degree government leaders (69%), in 2025 than in 2021 (56% and 58%, respectively).
According to the Trust in News Providers report published by the UK Parliament in 2024, there is a shortage of causal evidence for why people trust or distrust the media. This makes prescribing solutions challenging. However, the report does signpost three potential causal factors: frequent social media usage including exposure to polarised views; poor representation and low levels of media diversity; and finally, a reaction to wider political events, alongside personal political affiliations (Bettis, 2024).
Further factors that may influence trust levels include media consumption and exposure (Schranz et al., 2018), with studies showing that habitual engagement may have a positive impact on trust (Frederiksen, 2014). This points to the importance of daily routines and frequent consumption (Tsfati & Ariely, 2013), exemplifying how familiarity can be a positive contributor to trust.
In addition, the media can shape trust, as they direct public focus and contribute to levels of policy salience (Hetherington & Rudolph, 2015). Alongside this, studies suggest that trust in government ‘tends to be boosted’ when media coverage is positive, and trust typically falls under negative media attention (STATEC, 2023).
Studies also show that people tend to believe, and trust, news sources which confirm their existing opinions (Bettis, 2024). This may be reflected in the selection bias of news sources, and people’s low engagement with sources that typically contradict or challenge their existing opinions (Taber & Lodge, 2006). Moreover, studies have also suggested that those who already trust the media are, to some degree, more open-minded to trust-building initiatives than those with low levels of initial trust in the media. For this latter group, better communication in explaining why the strategies have been implemented may be helpful, though this is predicated on the caveat that the strategies must be authentically adopted and sustained over time (Banerjee et al., 2023).
The task of analysing and improving media trust has become increasingly crucial in recent years. Across the literature, academics and practitioners have proposed solutions. A report published by Reuters Institute for the Study of Journalism identified four approaches to building trust: better aligning news coverage with topics the public say they want; showing transparency and good ethics and avoiding conflicts of interest; ensuring journalistic independence and improving diversity; and finally, ensuring the public feel heard (Banerjee et al., 2023).
Though developed with media and journalism in mind, these four strategies could also be applied to the sphere of official statistics – with topical alignment, transparency, independence and public engagement providing helpful principles to follow.
Looking into one strategy – transparency – in detail, Khan (2025) asks, “Is it working?”. This report draws on previous work carried out by the Reuters Institute and shows that 54% of people indicated that they would be more likely to trust journalists if they explained their decisions about how they report the news (Banerjee et al., 2023). Furthermore, when asked which factors influenced their trust levels, 72% of respondents said that transparency about how the news is made influences which news outlet they trust (Nielsen & Fletcher, 2024).
Khan’s 2025 report also signposted specific strategies which can help “explain decisions about how”, such as open-source investigations, and prioritising replicability as part of the scientific method. Considering the concept of radical transparency, Khan (2025) praises initiatives such as “show your work” for their dedication to transparency. This willingness to show your working out, even if anecdotal stories suggest that the supplementary material may be largely ignored, can have a positive influence on trust. This plea for transparency is an important step in displaying vulnerability, and being open to scrutiny, as part of the trust building process.
This “show your working out” mentality can be applied to official statistics as a positive strategy in trust building. This is of particular interest when official statistics are cited in the media, especially on topics where multiple statistics could each be used to support conflicting narratives. An application of the finding may be to encourage journalists to explain why they selected the statistics that they did in these scenarios.
Universally, there is a recognition that these solutions are not easy and should not follow a one-size-fits-all mentality. Of course, once released, the output is in the hands of the media, and ultimately how it is reported is beyond the producer’s control. However, it is important that official statistics producers – as well as anyone acting in the capacity of an intermediary, or who is involved in the dissemination of official statistics – be mindful of sensationalised headline coverage and media logics.
This points to the importance of ensuring that all statistical outputs are properly caveated, and that the statistics will not be placed in a position where they are taking more weight than they can reasonably bear. Adhering to the strategies and principles outlined in the Code of Practice of Statistics, as well as following the principles of collaborative communication and intelligent transparency, as discussed in a recent blog, may help shield from possible misinterpretation.
2.2 Trust in Relevant Objects
So far, this review has detailed how levels of trust vary depending on the actor in question, with trust in the government, politicians and the media ranked below that granted to civil servants and scientists. Building on the accounts provided, one would expect trust in communication platforms (the object corresponding with the media) to be lower than trust in evidence (the object of scientists). This section explores this expectation, considering the objects to which trust is assigned in turn. This complements the agent-centric approach adopted in section 2.1.
2.2.1 Trust in Communication Outlets, Platforms and Intermediaries
It is important to understand levels of trust in the communication channels used to disseminate official statistics. These include government websites, alongside news outlets, online sources and other intermediaries. These outlets are crucial in the communication of, and the public’s engagement with, official statistics. As such, it is important that the public can trust them.
To situate this section, it is essential to note that this review was conducted without access to the data needed to fully understand precisely which platforms, outlets and media the public are using to access official statistics. With this ambiguity in mind, this section takes an all-encompassing approach and considers levels of trust across official communications, traditional media and online platforms.
There are very limited data on the public’s levels of trust in official communications. For these purposes, the section discussing trust in the government apparatus may provide the most appropriate proxy. The section considers trust in civil servants and other governmental bodies.
Specifically on the topic of platforms, the goal of building trust in government communication is explicitly recognised in the Government Communication Service (GCS) Strategy 2022-2025. This indicates that (in 2022) levels of trust had not surpassed the threshold which the GSC considered to be sufficient.
Monitoring the updated strategy for 2026 could provide an indication of whether, from the perspective of the GCS, trust has reached an acceptable level. Beyond this, further research in this area could be fruitful, specifically research considering public levels of trust in official communications, including GOV.UK as a platform.
Continuing the analysis of communication platforms, this review turns to media outlets – both traditional and online. Data generated as part of the Public Confidence in Official Statistics (PCOS) survey illustrate that the media are often the vehicle through which official statistics are transported to the public (National Centre for Social Research, 2024). Specifically, in 2023, 60% of people reported seeing statistics on the news at least several times a week, and only 4% stated that they had never seen statistics on the news. The same survey also revealed that a slightly lower percentage of people, 49%, reported seeing statistics on social media either daily (20%) or a few times a week (29%).
These high levels of self-reported exposure to statistical outputs via news outlets and online media highlight the role that intermediaries play in shaping how official statistics are communicated to the public. Alongside this, studies have pointed to the existence of a ‘trust gap’ between news media and online platforms (Mont’Alverne et al., 2022). This is interesting as it suggests that the platform from which the public access official statistics may impact their levels of trust in the statistics.
In the past, television was regarded as the public’s most relied-upon news source, holding the ‘crown…. since the 1960s’ (Ofcom, 2024). However, in 2024, Ofcom reported that online outlets had surpassed traditional televised news for the first time and had become the public’s leading source of news (71%). This overtake features a 5pp. increase in one specific mode of news engagement: social media (from 47% in 2023 to 52% in 2024), which has been matched by a fall in the number of people primarily consuming televised news (from 75% in 2023 to 70% in 2024).
With this trend in mind, it may be worth further emphasising direct-to-consumer publication of official statistics. This could be facilitated via social media platforms. Increasing emphasis on a diversified communication strategy, such as this, may broaden the audience scope and help to ensure that the public are informed of official statistics, even if they migrate to other platforms to access news. It is possible that this could mitigate against the effect of declining television viewership on the dissemination of official statistics.
Meanwhile, the same Ofcom study (2024) also reported that traditional outlets remain the most trusted news source (television is the highest-rated source for trust at 69%, followed closely by radio at 68% and printed press at 66%).
Given the relatively high levels of trust in these news sources, there may be value in official statistics producers investing significant time in seeking opportunities to promote their products through television, radio and printed press.
Trust ratings for online news sources, on the other hand, are much lower, with 53% reporting trust in news accessed online and only 43% trusting news circulated via social media (Ofcom, 2024). This suggests that although people may be accessing news more frequently via online intermediaries, they may not trust them to the same degree as traditional media.
Moving beyond trust in the news itself, other studies show that consumption of traditional media is associated with higher levels of trust in several public institutions, compared to those who access news online (STATEC, 2023, p.6). This points to the possibility of there being a spillover between the medium of news which members of the public use and levels of trust in public institutions.
Looking at trust in online platforms in more detail, a survey carried out by the Reuters Institute (Mont’Alverne et al., 2022) revealed that those who access news via online platforms on a daily basis are more likely to trust those platforms than those who use them for other purposes, or not at all. To illustrate this pattern, consider Google, the most frequently used online source for daily news of the seven platforms listed in the research (32% for the UK). In the UK, 83% of respondents who stated that they use Google to access their daily news reported that they trust Google, compared to 75% of those who use Google for other purposes, and 52% of non-users (Figure 2.2). A similar pattern is seen across other platforms. Of course, it may be possible that people turn to platforms which they already trust when searching for a regular source of reliable news. Accordingly, it should be recognised that this is an observation of association, rather than causation.
In the context of official statistics, PCOS reports that the use of official statistics is associated with higher levels of trust in official statistics (National Centre for Social Research, 2024).
Papers also report that news of a political nature tends to be treated as more suspect (Ross Arguedas et al. 2022, as cited in Mont’Alverne et al, 2022). Mont’Alverne et al. also found that UK respondents reported higher trust in news in general than trust in news about politics (53% vs 45%). This is a relevant consideration for official statistics, as, given that they are often produced by governmental departments, it is possible that they may be treated with political suspicion.
Reviews that focused on trust in news providers have also identified evidence that people who access news via social media tend to be more polarised and less trusting, with algorithms pushing certain sources, and echo chambers perpetuating, and intensifying, distrusting opinions (Bettis, 2024). This association between social media and distrust has also been considered in a Public Attitudes to Science case study exploring trust in alcohol research and guidance (PAS, 2019, p,60). The case study showed that social media users expressed confusion and became dismissive when evidence presented to them was contradictory. This scepticism of contradictory evidence may reduce trust in scientific evidence, and have a knock-on effect to other related areas.
With this in mind, it is important that, when communicating official statistics which feature conflicting messages or messages which may contradict the public’s established opinions or experiences, producers and intermediaries remain mindful of the possibility of being met with suspicion and withdrawal and consider how to present statistical outputs in ways which are less jarring to the audience’s worldview.
As presented in earlier in this section, this is only going to become more important as the trend of people relying on social media for information increases. Similarly, the potential for conflicting narratives in statistics may increase as official statistics continue to move from predominantly survey-based estimates towards estimates from a mixture of survey and administrative data.
Interestingly, however, studies suggest that advocating for the removal of social media, and a return to traditional news outlets, does not necessarily equate to increased levels of trust. This is exemplified in a study conducted by STATEC (the National Institute for Statistics in Luxemburg) in 2023. This study showed that whilst overall levels of trust in institutions were higher for those who accessed information about current affairs and the government from TV than those who accessed information from the internet, for both cases (traditional media and the internet), those who did not access any information from these sources reported lower levels of trust than those who used these sources to access information.
To illustrate this, frequent exposure to information on the internet had a positive effect on levels of trust in STATEC (+3pp.), whereas avoidance of information on the internet had a negative effect (-14pp.). This is counterintuitive as, based on the premise that consumption of news via online sources is thought to have a negative impact on trust (i.e., Bettis, 2024), one would expect no engagement with these online sources to result in positive levels of trust (as opposed to the negative coefficients reported in Figure 9).
Learning from this observation, official statistics producers and intermediaries may wish to consider a range of communications outlets and channels when disseminating official statistics within the public sphere. In addition to reaching a broader scope of audiences, a wide communication network prevents certain channels being neglected.
2.2.2 Trust in Evidence
As discussed in section 2.2.1, trust in news media is declining. This makes communicating official statistics more challenging, as the likely mode of delivery is facing increasing scrutiny.
When it comes to communicating evidence, it is important to be mindful of the wider context, as scientific evidence can only provide a ‘common factual baseline for public discourse if it has widespread support from the public’ (Younger-Khan et al., 2024, p.2). Additionally, incorporating Schäfer’s reflections on ‘mediated trust in science’ (2016), this picture is further complicated by ‘trust intermediaries like media…which may provide symbolic indicators’ (Bentele, 1994, as cited in Schäfer, 2016), thereby ‘doubl[ing] the configuration of trust’ (or distrust) and shaping public attitudes (Kohring, 2004, p.165, as cited in Schäfer, 2016). Adding to this, the British Academy (2024) also points to spillover effects and warns that distrust in politics can spill over to distrust in evidence. This review takes note of this implication that trust does not exist in a vacuum and recognises that the levels of trust the public assign to one actor, object or entity may be influenced by, and can have an influence upon, other actors, objects and/or entities.
That being said, the current section homes in on the question of trust in scientific evidence (the object). Accordingly, it highlights studies which investigate trust in scientific evidence, providing a picture of current levels, and signposting strategies for improvement. This distinction is prefixed on the suggestion by the British Academy that maintaining a clear distinction between the logics of politics (mobilising support) and scientific evidence (producing knowledge) – and effectively communicating these parameters to the public – may help stem some of the spillover effects (the British Academy, 2024).
Although reflective of a pre-COVID-19 picture, the results of the most recently published Public Attitudes to Science (PAS) survey from 2019 provide a helpful illustration of public levels of trust in scientific evidence (Department for Business, Energy, Industrial Strategy, 2020). In this survey, 50% of respondents believed that the information they hear about science is generally true (43% tended to agree, 7% strongly agreed). Meanwhile, only 8% disagreed (7% tended to disagree, 1% strongly disagreed). Interestingly, when asked to qualify their reasoning, participants’ responses more often referred to ‘a general feeling or instinct’ as opposed to specific reflection that related to scientific evidence.
Respondents of the same survey (PAS, 2019) also reported a default position of trusting science, largely because they had no reason not to. This reflects the idea of ‘resigned trust’ – which refers to a combination of apathy and insufficient knowledge or will to question one’s position. To paraphrase Schäfer’s comments, ‘trust is a substitute for knowledge and control’ (citing Kohring, 2001), and in situations of insufficient knowledge, there is little option but to trust (2016, p.3). This passive attitude was also observed for those who had expressed a negative view, with distrust being their default response, until they were provided with evidence to confront their position. Unfortunately, comparable post-pandemic data on the Public Attitudes to Science survey are not yet available (those interested in monitoring trust levels should look out for the fifth wave, which is scheduled to be completed in spring 2025).
Thus far, societal and political changes have been alluded to with brevity. However, in reviewing the literature on trust in evidence, it would be a glaring omission not to reflect on ‘science-related populism’ as part of the anti-elite backlash which has gained momentum in the current political juncture. Science-related populism applies the same antagonistic framework as political populism. However, instead of positioning ‘the people’ against ‘the political elite’, it positions ‘the people’ against ‘the academic elite’ and claims that scientific evidence is inferior to the ‘common sense of the people’ (Mede & Schäfer, 2020). This may undermine trust in evidence-based decision making, as science-related populism denies the expert credentials of scientists and proffers narratives that they are not acting in line with the public’s best interest (Cologna et al., 2025, p.714).
This positioning of common sense as antagonistically opposed to scientific evidence has also been reflected in concerns voiced by the public specifically in relation to official statistics. This positioning relates to concerns that official statistics do not reflect lived experiences and appear to some as contradictory to common sense. Criticism of inflation statistics provides one such example of people viewing statistics as not being reflective of their experiences. This criticism stems from the observation that inflation for the poorest 10% of households was actually 12.5% in October 2022 – notably higher than the headline figure of 11% (David, 2022). This highlights the importance of ensuring that official statistics reflect the common sense of different groups, as from the perspective of the poorest 10%, this figure did not reflect their lived experiences.
As a recommendation, producers should make efforts to ensure that official statistics are in line with the lived experiences of the different publics. Strategies to improve the personalisation capacity of official statistics may help to remedy concerns of this nature.
ONS has developed an example, the “personalised inflation rate calculator”, which is designed to allow the public to generate a more representative picture of what the statistics mean for them in their everyday lives (ONS, 2022).
In addition to providing bespoke and tailored statistical products to reflect common-sense experiences, transparent action, such as explaining “how” the evidence has been incorporated, is also encouraged to increase trust levels (the British Academy, 2024).
Sense about Science has been advocating for this as part of its #ShowYourWorkings campaign, as well as producing resources such as the Evidence Transparency Framework (an evaluative tool which scores the transparency of evidence from 0 to 3). These examples share similar overtones to the discussion surrounding transparency in the section dedicated to building trust in media actors. This once again demonstrates that explaining the process, and being transparent in your working out, can better position the evidence and hopefully prevent outright dismissal.
Further strategies recommended in the Public Trust in Science for Policy-making report highlight the importance of communication and understanding underlying attitudes (the British Academy, 2024). Specifically, the report mentions styles of communication, which echoes the previous discussion (section 2.1.3) about the trade-off between being open and relatable, versus exemplifying competence and scientific rigour, as well as emphasising the importance of communicating uncertainty.
As Kerr et al. (2023) explain, the myriad of ways that uncertainty can be presented necessitates a considered, and careful, approach. Based on a large survey experiment focused on information relating to COVID-19, they found that ‘an explicit verbal statement of uncertainty… decreases the perceived trustworthiness of the [number]’ (p.11–12). Yet providing a numerical range cue may ‘buffer against future damage if figures are revised’ (p.13). Alongside this caveat, Kerr et al. (2023, p.3) also note that the time frame, the language used and the field or topic may alter the effects of communicating uncertainty. To quote van der Bles et al., ‘in some decision settings, people might expect uncertainty’ (2019, p.20). For instance, as Joslyn and LeClerc (2013, as discussed in van der Bles et al., 2019) report, warnings of uncertainty around weather forecasts may be more forgiving and actually contribute to improved trust and more accurate expectations.
Based on this, although there is consensus about the need to communicate uncertainty, producers should consider specific guidance which is applicable to the type of statistic they are working with.
Van der Bles et al.’s (2019) review develops a framework for communicating epistemic uncertainty based on Lasswell’s model of communication. Statistical producers may find this to be a helpful resource. OSR has also reviewed ways in which uncertainty can be communicated in statistics, which may be useful to producers.
With regard to understanding underlying attitudes, the Public Trust in Science for Policy-making report highlights the spillover of political distrust and suggests that people process information in a biased fashion (the British Academy, 2024).
This suggests that it is important to be mindful of underlying attitudes when communicating official statistics which may contradict the public’s established opinions or experiences.
Alongside this, the report also outlines the philosophical stance of the British Academy, which points to the unsuitability of the “deficit” mentality and notes that ‘simply providing more evidence is unlikely to shift attitudes’ (2024, p.6). In the context of this review, there is an important distinction to highlight here: “more evidence” is not the same as more-detailed workings out, or a more thorough account of the decisions underpinning the statistical output. In line with this, the advocation of “more evidence” as an attempt to “plug the knowledge gap” should not be confused as contradictory to, or mistaken for, the recommendation to provide more transparency and depth.
Rather, the recommendation proposed in this review is a qualitative change in the way the evidence is presented (to provide the necessary detail in a way which users can easily understand), not a quantitative prescription to provide “more”.
Additionally, studies have also alluded to the possibility that the method of exposure (direct or mediated) to scientific evidence influences trust in that evidence. However, further research is needed to establish a systematic picture of ‘what kinds of media representations effectively trigger trust in science’ (Schäfer, 2016, p.4). Some studies have shown that the way evidence is reported may not be a predictor of trust in science (Wintterlin et al., 2022), whereas others, to quote Younger-Khan et al., find that ‘exposure to misinterpreted, sensationalised information or pseudoscience can lead to distrust or scepticism’ (2024, p.5).
Focusing on official statistics, Radermacher warns that ‘trust could easily be lost because of misunderstandings and wrong perceptions or expectations’ (2020, p.72). This points to the importance of credible communication as a method to convey to audiences that the statistical output has met the quality criteria and is “fit for purpose”. With this in mind, Blastland et al. (2020) developed ‘Five rules for communication’. These are: inform not persuade; offer balance, not false balance; disclose uncertainties; state evidence quality; and inoculate against misinformation (as summarised by the British Academy, 2024, p.29).
Continuing the theme of communication, technical and specialist jargon, which can be isolating for users, should be avoided. Instead, producers should explain the complexities of statistical processes in a simple and easily interpretable manner.
This is important for trust, as it exemplifies transparency and makes statistical outputs accessible to a wider audience. In this respect, using exclusive language can be interpreted as a barrier to meaningful understanding, and result in misunderstanding, confusion and/or isolation. Consequently, working to dismantle some of the barriers that technical jargon constructs and “lift the curtain” on the mystery of statistics may be conducive to building trust.
The final point to highlight on this theme relates to the topic, or area, of evidence. Despite overall reporting a weak positive relationship in favour of the “knowledge deficit model”, a meta-analysis carried out by Allum and colleagues in 2008 suggests that the area of science – including whether it is contentious or not – may possibly alter the relationship between knowledge and attitudes (with willingness to trust being treated as attitudinal). Specifically, in reviewing the literature, they highlight a study carried out by Evans and Durrant (1995) which showed that the more people learnt about the science relating to human embryos (regarded as a contentious issue), the more negative their attitudes towards science became. Although insufficient to discredit the assumptions of the knowledge deficit model entirely, the observation that, in certain situations, increased awareness may result in the inverse outcome (of negative attitudes) is noteworthy. Studies such as this promote caution and indicate that when it comes to prescribing increased knowledge, the picture may be more mixed than first assumed.
In addition, studies also suggest that trust levels depend on the field of science. This builds on academic debates which refer to a hierarchical distinction between ‘hard’ and ‘soft’ science, with ‘scientific’ virtues more commonly ascribed to the former. Intuitively in line with this, Younger-Khan et al. (2024) developed a vignette experiment which found that hard sciences (genetics and material sciences) were considered to be more trustworthy than the softer sciences of economics and education. Relying on a parsimonious measure developed by Hendriks et al. (2015), the same research also reported higher mean outcomes for the harder science disciplines in terms of competence (expertise), honesty (integrity) and responsibility (benevolence).
Further research to ascertain whether this distinction influences the levels of trust in different official statistics would be useful. Specifically, this may uncover whether statistics relating to “hard” fields are regarded as more trustworthy than those capturing “softer” areas.
2.3 Trust in Official Statistics
As mentioned earlier in this review, surveys, and other studies more broadly, dedicated to the theme of understanding levels of public trust in official statistics are sparse. There are, however, a few exceptions, which will be highlighted here.
The studies detailed in this review provide helpful insights relating to the question of trust in official statistics. Nevertheless, some of the design choices rely on a pre-determined list of options or stop short of asking respondents what strategies can be adopted to help remedy the low levels of trust they report. This limits responses in line with the possibilities prescribed, and may miss important, and perhaps even widely shared, reasons for the public’s decision to trust, or distrust, official statistics.
Consequently, over and above reporting on existing studies, additional primary research has been carried out as part of this project. This provides further qualitative insights in an unstructured, user-led format. This primary research is based on free-text responses which members of the public provided as part of OSR’s research project, Statistics in Personal Decision Making. The responses to the question ‘what might increase your trust in official statistics?’ were thematically analysed and themes are outlined. Where applicable, commonalities and discordance will be signposted within the upcoming section. The detailed analysis is presented in Appendix 1.
2.3.1 Trust in Producers of Official Statistics
Before moving to report what members of the public thought would help build trust in official statistics as a product, this section will start by focusing on one prominent, high-profile producer, the Office for National Statistics (ONS). Within the UK statistical system, official statistics can be produced by or on behalf of central and devolved governments, and by others (listed in secondary legislation). ONS is a non-ministerial department that independently produces official statistics.
Evidence from the Public Confidence in Official Statistics (PCOS) survey in 2023 shows that 87% of respondents who provided an answer reported that they trust ONS, which is a higher percentage than those reporting to trust the courts (82%), the government (31%) and the media (25%) (National Centre for Social Research, 2024). It should be noted that while the survey aims to be broadly representative of the population in Great Britain in terms of a range of demographic factors, an adjusted household-level response rate of 21.2% means that the achieved sample may differ from the wider population in other ways. For example, those who trust ONS may have been more likely to complete the survey.
Further insights from PCOS reveal that those who used official statistics reported higher assessments of the accuracy and independent integrity (being free from political interference) of official statistics than non-users (91% vs 80% and 82% vs 68%, respectively). Moreover, PCOS also revealed that trust, for both the statistical output and ONS as a producer, was higher for people who use statistics than non-users (99% vs 82% and 98% vs 80% respectively). This reiterates the observation that use and familiarity may increase trust level.
Further evidence that public profile and wider familiarity are conducive to fostering a more trusting and receptive audience can be learnt from other contexts. For example, in examining institutions in the Czech Republic, Lyons (2013) highlights the importance of visibility and shows that institutions with high salience are more likely to be trusted. A similar pattern of familiarity being associated with higher trust levels was noted in a survey carried out in 2023 by the Northern Ireland Statistics and Research Agency (NISRA). This survey reported that public awareness of the producer is associated with higher levels of trust in the outputs they deliver.
Turning to the question of why people do, or do not, trust ONS, the PCOS (2023) survey finds that ONS not having a vested interest (63%) and its level of expertise (56%) were the most frequently selected reasons for trusting ONS (National Centre for Social Research, 2024). Meanwhile, when asked to elaborate on reasons for distrusting ONS and the statistics it produces, misrepresentation by politicians (49%), the statistics not telling the whole story (45%) and misrepresentation by the media (38%) were the top three responses. Whilst these responses were selected from a pre-determined list which was provided to participants, the same concerns were also raised in the free-text format. Specifically, the findings of the primary research conducted for this report (as discussed in Appendix 1) reveal that in the free-text responses, respondents suggested that remedying the manipulation or skewed presentation of official statistics would be a fruitful area for dedicated improvement.
2.3.2 Effective Communication
Whilst these concerns around distorted communication and possible manipulation may be external and, to some extent, beyond producers’ control, the importance of effective communication has been emphasised in other surveys. Thus, it remains a recurrent theme which is often advocated as a route to build trust in official statistics.
For instance, respondents to a survey titled “Official Statistics: Perceptions and Trust”, which featured in the 2005 Statistics Commission report, recommended a measured approach to using statistics, though this evidence is now somewhat dated.
Building on this, recommendations include not allowing statistics to take on more weight than they can reasonably bear and increasing efforts to improve communication with users, particularly with regard to interpretation, accessibility and use by a non-technical audience.
Likewise, a second survey, which featured in the UK Statistics Authority report on Strengthening User Engagement (2009), repeated many of these pleas. It called for clearer communication and the inclusion of the necessary contextual information to aid interpretation, clearly reminiscent of the responses to the 2004 survey.
Equally, the primary research conducted as part of this project (described in Appendix 1) also identified issues with the communication strategies which accompany the circulation of official statistics. Respondents pointed to the need to be more explicit when acknowledging the limitations of the statistical product, as well as advocating for simplicity, wider publication and a clear articulation of the value that official statistics can provide.
Taking these suggestions into account, communicators may wish to make a concerted effort to highlight the relevance of any statistical output. In so doing, they should use simple, accessible language that avoids statistical jargon and does not presume existing analytical knowledge.
Recommendations relating to communication also feature elsewhere within this review, supported by detailed evidence for their implementation. This indicates that mechanisms to improve trust may be echoed and shared across different bodies and outputs.
2.3.3 Quality Products
Furthermore, additional interesting insights can be gained from the “Official Statistics: Perceptions and Trust” survey (Statistics Commission, 2005). This survey presents a positive picture of trust levels in 2004, with interviewees rating the quality of UK official statistics as being the ‘best in the world’. This points to the importance of maintaining a reputation of being high quality, with a first-rate product noted as being important to members of the public.
The results of the present primary analysis also highlight the importance of communicating the quality of official statistics. For example, respondents expressed an interest in seeing a more detailed account of the methods and suggested this may improve confidence in the output.
Also relevant from the perspective of quality, the findings of the primary research reveal that members of the public consider that regular updates and the accessibility of a statistic positively contribute to levels of trust (see Appendix 1). Moreover, reiterating the findings discussed earlier in this review, the primary research also points to the benefits of providing paper trails and showing the workings behind the statistics.
In line with this, statistical producers should ensure they are clearly communicating the participant recruitment protocols, data collection procedures and analytical methods utilised in the production of the statistical output.
2.3.4 Spillover Effects
Moving on to other studies, this review considers what can be learnt from other countries. Looking at trust in official statistics in Luxembourg, STATEC (2023) performed a regression analysis showing that trust in official statistics is associated with an increased likelihood of trusting other governmental bodies and institutions. Whilst this analysis does not provide evidence for spillover effects moving towards official statistics, it does show that trust in official statistics is associated with a greater likelihood that people trust other parts of the government apparatus. This provides evidence in favour of network dynamics, showing how trust can be diffused and shared across government bodies and the outputs they produce.
Echoing the observation of possible spillover effects, a study conducted in France (Chiche & Chauverie, 2016) suggested that trust in official statistics is correlated with trust in political institutions (such as government, parliament and president). Understanding whether this correlation also occurs in the UK context, and whether the relationship works in reverse (i.e., does increasing trust in producers increase trust in official statistics?) is an avenue worthy of further exploration.
2.3.5 Transparency
Continuing this theme of learning lessons from overseas, Zeelenberg’s (2012) examination of trust in official statistics in the Dutch context is also instructive. Although the figures relate to Dutch citizens, the conclusion that ‘the public must regard official statistics as undisputed’ is worth highlighting and may be valuable in the context of this report. To clarify, by ‘undisputed’, Zeelenberg does not mean unquestioned. Rather, the recommendation relates to promoting transparency and access so as to give the public no reason to dispute official statistics. This is reminiscent of OSR’s initiative of intelligent transparency, which reflects the importance of providing data in an accessible and clear way.
Echoing O’Neill’s (2013) plea to reserve trust for that which is trustworthy, Radermacher recommends that, rather than pleading with individuals to have ‘blind faith in statistics’, transparency and participation should be adopted to enable trust to be built through mechanisms of experience (participation) and evidence (open transparency) (2020, p.171). These two pillars tie in with further themes identified in the primary analysis, with transparency and personal experience mentioned as possible avenues to help build trust in official statistics.
In proposing recommendations, the suggestions to improve transparency are considered in tandem with appeals for impartiality to be guaranteed and requests for official statistics to be audited, monitored and verified. Bringing all these responses together, it seems that signposting may helpfully contribute towards demonstrating transparency, impartiality and ‘true independence’.
Consequently, those involved in the communication and dissemination of statistical outputs may wish to dedicate further efforts to overtly highlighting, where applicable, any review process that the statistic has undergone, as well as underscoring the impartiality of statistical producers.
2.3.6 Reflect Personal Experiences
Echoing the pillar of experience that Radermacher (2020) alludes to, Ruppert et al. (2018) highlight the importance of “subjective statistics”. In terms of strategies, co-production and collaboration are recommended to help overcome barriers of poor representation and issues surrounding data detachment. Advocates of “citizen science”, Ruppert et al. promote an approach which combines statistical science and the lived experiences of citizens in order to produce representative data. In so doing, the report challenges the notion that there is only one route to achieving the status of “official” and instead promotes user-driven collaboration in the creation of facts.
Reflecting on this diversity of experience, Ruppert et al. (2018, p.180) caution against ‘calling out bad numbers’ and explain how this should not be positioned as a competition between fact and fiction, because doing so reinforces a mistaken belief that there is one accurate number which represents objective fact. In practice, this approach highlights the importance of acknowledging that, irrespective of whether the numbers reported in statistics are judged good or bad, they ‘inevitably involve normative judgements about social meaning’ and represent only one possible version of knowledge production. This points to the importance of monitoring who is included (and excluded), with workshops highlighting the importance of representative collaboration which ensures that citizens are continually involved in the processes of statistical production.
Others have taken an alternative outlook and highlighted the importance of ‘speaking out when the evidence is unpalatable’ (Pullinger, 2020). In addition to the importance of speaking out in disagreeable situations, the performance–trust model of trust building would imply that being held accountable for poor performance is crucial to build trust (or more accurately, prevent existing trust from being undermined).
Nonetheless, Ruppert et al.’s (2018) point remains valid, and a ‘care-full’ approach to ‘calling out bad numbers’ is important, especially as these ‘bad numbers’ may reflect citizens’ lived experiences and calling them out may intensify feelings of exclusion.
This sentiment was identified in the primary analysis, where taking steps to ensure statistics reflect users’ experiences was mentioned by respondents as a potential mechanism to increase trust in official statistics. In summary, respondents suggested that statistical producers could provide personalised statistical outputs which reflect the users’ current situation and/or locality, and could further invest in efforts to communicate user relevance.
This suggests that signposting user relevance and making clear and direct comparisons to personal experiences may help build trust in official statistics.
Following a similar theme, according to PCOS, 45% of respondents who provided an answer indicated that the belief that official statistics alone do not tell the whole story contributed to feelings of distrust (National Centre for Social Research, 2024).
In some situations, it may be unrealistic to expect official statistics to tell the ‘whole story’. For these situations, it would be important to be upfront about which aspects are and are not included in the data, and where possible, to signpost to other complementary information that may complete the picture.
Allegrezza (2022) also reflects on the disenfranchisement that individuals can experience when they do not consider themselves to be reflected in the “average man” account that official statistics represent. The article highlights the negative ramifications of numbers muting individual distinctiveness and explains how this can cause frustration.
Complementing official statistics with other sources, such as qualitative analysis, may help demonstrate the person-level stories that may be masked in population-level statistics.
In summation, this approach suggests that portraying statistics as something that must be objective and universal is actually harmful, as this has the effect of shutting down debate and contestation, dismissing the importance of lived experience. This may have further repercussions in terms of trust levels.
Finally, dedicating attention to the importance of reflecting lived experience is particularly prominent because, as this review has shown, being able to see yourself in the data is a principal foundation of trust in the end product.
In response to this, Ruppert et al. (2018) suggest that trust in official statistics can be achieved through the legitimisation mechanisms embedded in co-production as a central feature of statistical production.
In addition to this, as the primary research reveals, providing a fuller, and more comprehensive, picture that displays results from ‘a wide range of areas and people’ may help to remedy concerns about the statistics not telling the full story.
Lastly, by increasing efforts to provide personalised statistics that reflect an individual’s story, producers may be able to counteract feelings of being underrepresented, or neglected, in the ‘story’ that the official statistics tell.
