
How Language Shapes Belief in Misinformation: A Study Among Multilinguals in Ukraine

Published online by Cambridge University Press:  26 August 2025

Aaron Erlich*
Affiliation:
Center for Social Media and Politics, New York University, New York, NY, USA Department of Political Science, McGill University, Montreal, QC, Canada Centre for the Study of Democratic Citizenship, Montreal, QC, Canada
Kevin Aslett
Affiliation:
Center for Social Media and Politics, New York University, New York, NY, USA
Sarah Graham
Affiliation:
Center for Social Media and Politics, New York University, New York, NY, USA
Joshua A. Tucker
Affiliation:
Center for Social Media and Politics, New York University, New York, NY, USA Wilf Family Department of Politics, New York University, New York, NY, USA
Corresponding author: Aaron Erlich; Email: aaron.erlich@mcgill.ca

Abstract

Scholarship has identified key determinants of people’s belief in misinformation predominantly from English-language contexts. However, multilingual citizens often consume news media in multiple languages. We study how the language of consumption affects belief in misinformation and true news articles in multilingual environments. We suggest that language may carry specific cues that affect how bilinguals evaluate information. In a ten-week survey experiment with bilingual adults in Ukraine, we measured whether subjects evaluating information in their less-preferred language were less likely to believe it. We find those who prefer Ukrainian are less likely to believe both false and true stories written in Russian by approximately 0.2 standard deviation units. Conversely, those who prefer Russian show increased belief in false stories in Ukrainian, though this effect is less robust. A secondary digital media literacy intervention does not increase discernment, as it reduces belief in both true and false stories equally.

Information

Type
Preregistered Report
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of American Political Science Association

Introduction

Our cumulative knowledge about belief in misinformation predominantly comes from surveying English-speaking Americans about misinformation written in English from American media sources (Allcott and Gentzkow 2017; Clayton et al. 2020; Pennycook and Rand 2020; Pennycook et al. 2018).Footnote 1 However, the global media environment is deeply multilingual. Half of the global population uses two or more languages or dialects in daily life (Ansaldo et al. 2008; Grosjean 2010) and therefore likely consumes media, including misinformation, in multiple languages from both domestic and foreign sources. As cross-border media consumption increases, and multilingual media with it (PwC-UK 2016), the distribution of false or misleading news in different languages carries substantial political consequences. Reporting news in different languages can differentially mobilize populations (Onguny 2019), sometimes based on false or misleading evidence that escalates political violence (Ismail and Deane 2008). Differential belief in misinformation strengthens ethnopolitical divides (Somerville 2009) and has been linked to increased levels of affective polarization (Lau et al. 2017; Stewart et al. 2021; Suhay et al. 2018), which can weaken the foundations of liberal democracy (Kuklinski et al. 2000). Given both these potentially troubling consequences for democracy and the rise of multilingual citizens who consume news in different languages within a single media market, it is imperative to develop a more comprehensive understanding of how news consumers assess the veracity of news in different languages. In this registered report, we address the broad research question: are individuals more or less susceptible to misinformation in their non-preferred language?

Previous research on language and belief in misinformation has focused on identifying a language proficiency effect, but we propose that language may affect belief in misinformation even if individuals are similarly proficient in both languages. For example, language may signal credibility or elicit an emotional response, which may, in turn, affect belief in misinformation written in that language. A linguistic minority may be more skeptical of news written in a different language used by the majority because they have become distrustful of media that communicate using that language, or because of tensions with the majority linguistic group. Cues unique to a language and context, such as these, may explain why past studies measuring the effect of language proficiency on belief in misinformation have reported inconsistent results across different languages. Rather than a proficiency effect, we test whether there is a general language cue that affects even those who are equally proficient in their less-preferred language. Since previous research has found that emotion affects belief in false news, but not true news (Martel et al. 2020), we focus on belief in misinformation; however, we also perform exploratory analysis measuring the effect of language on belief in true news.

In this registered report, based on a peer-reviewed pre-analysis plan (PAP),Footnote 2 we test whether language cues affect belief in misinformation in a country where most individuals are similarly proficient in two languages. To determine whether language indeed affects belief in misinformation regardless of proficiency level, we conduct a survey experiment in Ukraine in which we randomly assign the language (either Ukrainian or Russian) of news articles to bilingual respondents, regardless of which language they prefer, in the days immediately after an article’s publication. Our primary research question is: are multilingual individuals more skeptical of misinformation produced in their less preferred language? We secondarily test a “tips and tricks” intervention to see whether the positive effects reported in other contexts hold in Ukraine.

Theory and hypotheses

The preponderance of work on language and misinformation has focused on identifying a language proficiency effect by investigating two modes of cognition: an effortless mode based on heuristics and a more reflective mode based on deliberation (Costa et al. 2017; Corey et al. 2017; Keysar et al. 2012; Muda et al. 2023). These studies’ inconsistent results suggest that reading news in one’s less proficient language may not have the same effect across languages and contexts. Language influences information processing through multiple mechanisms: as a source cue affecting perceived credibility (Dragojlovic 2015; Sundar and Nass 2001), through cultural and cognitive priming effects on bilinguals (Boroditsky 2006; Ross et al. 2002; Trafimow et al. 1997), and by activating political identities (Flores and Coppock 2018; Pérez and Tavits 2019). These effects are context-dependent – in Ukraine, we expect both majority and minority language groups may be less likely to believe misinformation in their less-preferred language, though for different reasons.

Within a country, minority groups with distinct languages can be skeptical of news that is written in the language used by the majority because mainstream news in the majority language often portrays minority groups in a negative light (Keshishian 2000; Mastro 2009; Tukachinsky et al. 2015). This skepticism can push those in the linguistic minority to consume sectoral or extranational media that often use different languages (Tsfati and Peri 2006). Minority groups’ divergence in trust can gain prominence during periods of crisis (Vihalemm et al. 2019). Indeed, it is likely that during a crisis, mainstream news in the majority language can promote separate narratives that alienate minorities who are already skeptical of news in the majority language. Therefore, in such situations, it is possible that those who prefer a minority language are less likely to believe misinformation if it is written in the majority language (i.e., their less-preferred language). In Ukraine, we can test if this is indeed the case by surveying those who prefer Russian and measuring the effect that reading misinformation in their less-preferred language, Ukrainian, has on belief in that misinformation during a crisis (Russia’s 2022 full-scale invasion of Ukraine). Although the status of the Russian language is changing quickly in Ukraine and is a debated question, in our pilot, we found that almost the entire population of Ukraine reports high proficiency in both languages. Given their minority status in Ukraine, we expect those who are Russian-preferring in Ukraine to be more skeptical of misinformation written in Ukrainian than in Russian.

International conditions can also create language cues for majority language users, particularly when a foreign power uses a minority language for disinformation campaigns (StratCom 2015). In Ukraine, Ukrainian-preferring news consumers may associate Russian-language news with foreign actors, potentially reducing belief in that information – a phenomenon partially supported by observational research showing Ukrainian language users are less likely to believe pro-Kremlin disinformation (Erlich and Garner 2021).

First, given the literature, we test hypothesis H1:

H1 Individuals are less likely to believe a false/misleading article written in their less preferred language than in their more preferred language.

However, we have posited different mechanisms for belief in misinformation depending on whether an individual prefers to speak a minority or majority language. Therefore, we investigate whether there is support for our hypotheses among minority and majority users in the country. To do so, we test H1 with solely Russian-preferring respondents and separately with solely Ukrainian-preferring respondents, which are subgroup tests of H1.

Second, while we cannot randomly assign mistrust of the central government, we can test some correlational observable implications of our causal mechanism by hypothesizing that among those who prefer Russian, as their distrust in the central government increases, they will be less likely to believe misinformation in Ukrainian relative to Russian.

H2a Among those who prefer the Russian language, the negative marginal effect of reading news in Ukrainian (relative to Russian) on belief in misinformation will be greater as distrust of the central government increases.

Third, we posit that among those who prefer Ukrainian, as animus towards Russia increases,Footnote 3 they will believe misinformation in Russian less, relative to misinformation in Ukrainian. Again, we cannot randomly assign animus (nor should we), but we can examine correlational support for our mechanism. Hence, we test:

H2b Among those who prefer the Ukrainian language, the negative marginal effect of reading news in Russian (relative to Ukrainian) on belief in misinformation will be greater as animus towards Russia increases.

In addition to testing the effect of language on belief in misinformation, Ukraine offers an opportunity to test the effectiveness of popular media literacy interventions on audiences exposed to large-scale information literacy programs in a propaganda-saturated information environment. To this end, we set out to measure the effectiveness of one of the most popular media literacy interventions, Facebook’s “Tips to Spot False News,” in Ukraine.Footnote 4 Since Russia first invaded Ukraine in 2014, Ukraine’s information environment has become saturated with disinformation (Pasitselska 2017; Szostek 2017) and divergent interpretations of events (Koltsova and Pashakhin 2020; Szostek 2018). In response, information literacy courses designed to inoculate individuals against misinformation have become popular and have been integrated into the public school curriculum in Ukraine.Footnote 5

Previously, Facebook’s “Tips to Spot False News” was tested in countries where propaganda and information literacy programs were not prevalent (the United States and India in 2019), which leaves us uncertain as to its effectiveness in areas where literacy programs and propaganda have become quite prevalent, such as Ukraine. Therefore, we test the following hypotheses:

H3a When individuals read the “tips to spot false news,” they will be less likely to believe a false/misleading article than those who do not read these tips.

H3b When individuals read the “tips to spot false news,” they will be more likely to correctly discern between a false/misleading article and a true article than those who do not read these tips.

We also pre-registered nine secondary exploratory analyses, for which, due to space constraints, we mainly present the results in Online Appendix O (although some are referenced in the main text).Footnote 6

Research design

We conducted a 10-week survey experiment in Ukraine, from May 15 to July 19, 2024, to test our hypotheses. Ukraine is an ideal case because most of its citizens are bilingual news consumers in Ukrainian and Russian. Each week, we used quota sampling to ensure geographic, ideological, and linguistic variation. While the full-scale invasion impacted all of Ukraine to varying degrees, two regions were hardest hit: the East and the South, which have also historically had the most Ukrainians who use Russian as a primary language (although this has begun to change in recent years). We met our quotas in the East; however, we ultimately did not reach our full quota in the South. Further research could investigate how war-affectedness correlates with our treatment effects, although Appendix O.6 shows that, for this study, coefficients estimated in the South are similar in size to our overall estimates.Footnote 7

Previous research measuring belief in misinformation has yet to integrate important findings about how individuals consume misinformation, limiting inference from these studies. Specifically, misinformation is consumed very quickly after publication (Starbird et al. 2018; Vosoughi et al. 2018), but most research asks respondents to evaluate months- or years-old fact-checked misinformation (Bronstein et al. 2019; Clayton et al. 2020; Pennycook and Rand 2020). To address this limitation, we create a transparent, replicable, and pre-registered news article selection process that sources popular false/misleading and true articles within 24 h of their publication and subsequently distributes the full articles for evaluation to respondents in Ukraine. Our respondents evaluate these popular articles within 48–96 h of publication. This process ensures that we are measuring the effect of language on belief in popular misinformation in the time period that individuals are most likely to consume this misinformation. Our method also reduces researcher bias in article selection.

For each of the study’s 10 weeks, we collect and distribute a new group of five articles for each respondent to evaluate in randomized order. In the first two weeks, three of the five articles come from political websites known to produce low-credibility news and two come from mainstream news sources; in subsequent weeks, four of the five come from low-credibility sources and one from mainstream news sources. Online Appendix A details the sources and selection process.Footnote 8

Figure 1 shows our weekly process. Each Tuesday morning, we select Monday’s most popular non-excluded article from each of the five source lists (see Online Appendix B for exclusion protocols).Footnote 9 Three professional fact-checkers classify each article, and we use the modal response as the final classification.Footnote 10 On Wednesday, respondents receive articles to evaluate by Friday, with language (Ukrainian or Russian) randomly assigned.

Figure 1. Timeline of survey each week.
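One step in this pipeline, taking the modal fact-checker response, is simple enough to make concrete. The following is a minimal sketch, not the authors’ code; the tie-handling rule for a three-way split is our assumption, since the text does not specify one:

```python
from collections import Counter

def modal_label(ratings):
    """Return the majority label among three fact-checker ratings."""
    label, count = Counter(ratings).most_common(1)[0]
    # With three raters, a three-way split has no mode; treating that case
    # as "couldn't determine" is our assumption, not the authors' stated rule.
    return label if count >= 2 else "couldn't determine"

# Example: two of three fact-checkers call the article false/misleading.
print(modal_label(["false/misleading", "true", "false/misleading"]))
# -> false/misleading
```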

Using this process, we then distribute a survey that includes pre-treatment covariates and two stand-alone attention checks prior to the experimental manipulation.Footnote 11 In this manner, respondents evaluate articles within 48–96 h of publication.Footnote 12

Given that we focus on belief in misinformation, we only use evaluations of articles rated as false/misleading by professional fact-checkers in the main analysis but use true articles and discernment in exploratory analyses. We leverage the random assignment of the language in which the story is read by respondents to assign article evaluations to a control or treatment group. Every evaluation of an article read in a respondent’s non-preferred language is a “treated” observation, while each evaluation of an article in a respondent’s preferred language is “not treated.” Table 1 displays the evaluations we consider in the treatment and control group among Ukrainian-preferring and Russian-preferring Ukrainians. Per the PAP, we debrief subjects on the veracity of the stories they see.Footnote 13

Table 1. Assignment of treatment by preferred language of respondent and language article is written in
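The coding rule in Table 1 reduces to a single comparison. As a minimal sketch (the column names here are illustrative placeholders, not the replication data’s variables):

```python
import pandas as pd

# One row per article evaluation: an evaluation is "treated" when the
# randomly assigned article language differs from the respondent's
# preferred language.
evals = pd.DataFrame({
    "preferred_language": ["ukrainian", "ukrainian", "russian", "russian"],
    "article_language":   ["ukrainian", "russian", "ukrainian", "russian"],
})
evals["treated"] = (
    evals["article_language"] != evals["preferred_language"]
).astype(int)
print(evals)
```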

We test each hypothesis using a 4-point ordinal scale from “not at all accurate” (1) to “very accurate” (4).Footnote 14

In a second experiment, we randomly present half of the respondents with “tips” to help spot false news stories, shown in the language in which they chose to take the study, before they assess the veracity of the articles.Footnote 15 These tips replicate Guess, Lerner, et al. (2020), who found that, in the United States and India, they decreased belief in misinformation and increased discernment between misinformation and mainstream news. We test whether these results replicate in a setting in which the information ecosystem is saturated with propaganda and information literacy programs. Online Appendix D contains the full set of tips.

Analysis & results

In presenting our results, we split the articles into those rated true and those rated false/misleading by the professional fact-checkers. We report results in two ways: by disaggregating the articles by this rating, or by measuring “discernment,” that is, whether the treatment makes respondents more likely to match the fact-checkers’ assessment.Footnote 16 To reiterate, we randomly assign everyone to read each article in Russian or Ukrainian, and then we ask them to evaluate the veracity of that article on a 4-point scale.
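Concretely, the discernment estimates reported below can be read as the interaction coefficient in a specification of roughly this form (a notational sketch; the exact pre-registered specifications, with batch fixed effects and covariates, are in Online Appendix R and the PAP):

```latex
\mathrm{rating}_{ij} = \beta_0
  + \beta_1\,\mathrm{Treat}_{ij}
  + \beta_2\,\mathrm{True}_{j}
  + \beta_3\,\bigl(\mathrm{Treat}_{ij}\times\mathrm{True}_{j}\bigr)
  + \gamma^{\top}X_{ij} + \varepsilon_{ij}
```

Here $\beta_1$ is the treatment effect on false/misleading articles (the reference category), $\beta_1+\beta_3$ is the effect on true articles, and $\beta_3$ is the discernment effect: a positive $\beta_3$ means the treatment raises ratings of true articles relative to false/misleading ones.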

For our primary analyses, we use linear regression models with batch fixed effects, where the unit of analysis is each article evaluation by an individual. Our experiments include a primary treatment examining articles presented in respondents’ less-preferred language (H1–H2), and a secondary treatment testing exposure to a digital literacy intervention (H3), with the latter assigned at the individual rather than article level. For H2a and H2b, we examine interactions with attitudinal variables like distrust in central government and anti-Russian attitudes. For the digital literacy intervention, we also examine discernment by modeling ratings of true and false articles through an interaction between article veracity and treatment.Footnote 17 We include pre-registered covariates such as demographic variables and attention checks, and use HC2 robust standard errors, clustering at the individual level for analyses involving the literacy intervention.Footnote 18 For hypotheses involving subgroup analyses, we estimate separate models for Ukrainian and Russian language preferences. The tabular results for H1 and H3 are available in Online Appendix N. All of our model specifications and details are available in Online Appendix R and the PAP. We document the estimates for all exploratory analyses (including those presented in the body of the manuscript) in Online Appendix O and robustness checks in Online Appendix P.
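The following sketch mirrors the structure of these specifications in a generic statistics stack. It is a minimal illustration under stated assumptions, not the authors’ replication code; the file and variable names (evaluations.csv, rating, treated, tips, week, respondent_id, and the covariates) are all hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per article evaluation.
df = pd.read_csv("evaluations.csv")

# Primary treatment (H1): article shown in the respondent's less-preferred
# language, with batch (week) fixed effects, illustrative pre-registered
# covariates, and HC2 robust standard errors.
h1_fit = smf.ols(
    "rating ~ treated + C(week) + age + education + attention_passed",
    data=df,
).fit(cov_type="HC2")

# Secondary treatment (H3): the literacy tips are assigned at the
# respondent level, so standard errors are clustered on respondents.
h3_fit = smf.ols(
    "rating ~ tips + C(week) + age + education + attention_passed",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["respondent_id"]})

print(h1_fit.params["treated"], h3_fit.params["tips"])
```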

Subjects are drawn from the research firm Info Sapiens’ online panel and invited to participate in our survey, administered via Qualtrics. In total, 1,897 participants took part in our study (1,116 Ukrainian-preferring, 781 Russian-preferring).Footnote 19 Each respondent rated five of the 50 articles fielded over the 10 weeks; the fact-checkers rated 37 (74%) as true, 10 (20%) as false/misleading, and 3 (6%) as “couldn’t determine.”Footnote 20 Of the 1,897 respondents, 1,325 rated at least one false/misleading story. Our data therefore include 5 × 1,897 = 9,485 article observations, of which 1,869 evaluations of false/misleading articles from 1,325 respondents are used in the primary analyses.

We find mixed evidence with respect to H1, as shown in Figure 2. Exposing subjects to misinformation in their less-preferred language reduces belief in these stories by 0.04 (S.E. = 0.05), which is 0.05 standard deviation units. While this is not statistically significant in our main specification (it is in our alternative specification, see Appendix Q), the data tell a much more nuanced story. When we estimate the effects separately for those who prefer Russian and Ukrainian, we find heterogeneous effects of reading the news in a non-preferred language.Footnote 21 Ukrainian-preferring respondents’ behavior aligns with H1: reading stories in Russian reduces their belief in misinformation (–0.18, S.E. = 0.05), which is 0.20 standard deviation units and statistically significant across model specifications. Russian-preferring respondents, however, contradict H1 and increase their belief in misinformation if the story is presented in Ukrainian (0.15, S.E. = 0.07), which is 0.18 SD units.Footnote 22 The difference between these two subgroups is large and statistically significant (–0.33, S.E. = 0.08), which is 0.25 SD units. This analysis, pre-registered as Exploratory Analysis 3, provides clear evidence that languages do not work symmetrically and that context may play a large role in how respondents react to a non-preferred language.
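The standard-deviation units used throughout are simply the raw coefficient on the 4-point scale rescaled by the standard deviation of the belief rating; the reported pairs imply a rating SD of roughly 0.9, an inference from the numbers above rather than a quantity stated in the text:

```latex
\text{effect in SD units} = \frac{\hat{\beta}}{\widehat{\mathrm{SD}}(\mathrm{rating})},
\qquad \text{e.g.,}\quad \frac{-0.18}{0.9} = -0.20 .
```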

Figure 2. Effects of reading in less-preferred language. Note: Point estimates are shown with 95% confidence intervals from models controlling for covariates (per PAP). Appendix Table H1 contains unadjusted models.

However, exposing respondents to stories in their less-preferred language also reduces their belief in true stories (Exploratory Analysis 2) by 0.09 on the four-point scale (S.E. = 0.02), a similar effect in SD units (0.10) to that for false stories, but one that is statistically significant given the much larger sample of true stories. As a result, the overall coefficient on discernment (Exploratory Analysis 7) is substantively null and not statistically significant (–0.04, S.E. = 0.03), which is 0.05 SD units (as it also is for those who prefer Ukrainian). The same heterogeneous pattern between Russian- and Ukrainian-preferring respondents also holds for true stories, but the coefficient on true stories for Russian-preferring respondents is much smaller and not statistically significant, which means that discernment for Russian-preferring respondents is negative (–0.15, S.E. = 0.07) and statistically significant in this main specification (but not in others).

As shown in Figure 3, we find no substantive support for H2a and limited support for H2b. For H2a, among those who prefer Russian, the marginal effect of reading an article in Ukrainian is positive at low levels of distrust in the central government, the opposite of what the hypothesis predicted. Moreover, the trend is hard to estimate (and not statistically significant), given the small number of respondents with high levels of trust in the central government. For H2b, the marginal effect of reading misinformation in Russian is negative and statistically significant among Ukrainian-preferring respondents with more anti-Russian ideology, but not among those who are more pro-Russian. However, we have very few pro-Russian respondents, so that trend, too, is hard to estimate and not statistically significant.

Figure 4 displays the results of our models testing Hypothesis 3. For all respondents, average belief in misinformation is reduced by 0.12 (S.E. = 0.04) among all whom we intended to treat with the “tips to spot false news” (Hypothesis 3a), which is 0.10 SD units. Moreover, the effect is similar for Russian- and Ukrainian-preferring respondents. This similarity is consistent with work suggesting that persuasion often occurs similarly across many different types of groups (Coppock 2023) and with research showing that analytic thinking has the same correlation with belief in news for both Ukrainian- and Russian-preferring respondents in Ukraine (Erlich et al. 2023). Unfortunately, the tips and tricks treatment also reduces belief in true articles by a similar amount as false articles (Exploratory Analysis 2). This means the tips and tricks did not affect discernment (0.00, S.E. = 0.03), counter to the prediction in Hypothesis 3b. We also display Exploratory Analysis 9, our estimation of complier average causal effects (CACE), where our indicator for receipt of the treatment is whether respondents can successfully answer at least two of the three follow-up comprehension questions about the tips we show them.Footnote 23 These estimates highlight that the effects of the tips and tricks are more than double among compliers, though less precisely estimated.
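Given the excludability assumption described in footnote 23 and one-sided noncompliance (only those assigned the tips can receive them), the CACE can be read as the intent-to-treat effect scaled by the compliance rate. The following is a minimal illustration with simulated data, not the paper’s estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical arrays: z = random assignment to the tips (1/0);
# d = "receipt," i.e., answered >= 2 of 3 comprehension questions correctly
# (possible only if assigned); y = belief rating of a false/misleading article.
z = rng.integers(0, 2, size=1000)
d = z * rng.integers(0, 2, size=1000)        # one-sided noncompliance
y = 2.5 - 0.25 * d + rng.normal(0, 0.9, 1000)

itt = y[z == 1].mean() - y[z == 0].mean()    # intent-to-treat effect
compliance = d[z == 1].mean()                # share of compliers
cace = itt / compliance                      # Wald estimator
print(f"ITT: {itt:.2f}, compliance: {compliance:.2f}, CACE: {cace:.2f}")
```

With a compliance rate around one half, the CACE is roughly double the intent-to-treat estimate, consistent with the pattern reported above.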

Figure 3. H2: Conditional subgroup effects. (a) Marginal effects of evaluating false/misleading news articles in one’s less preferred language on Russian-preferring respondents across different levels of central government distrust. (b) Marginal effects of evaluating false/misleading news articles in one’s less preferred language on Ukrainian-preferring respondents across different levels of anti-Russian ideology. Note: The gray shaded area represents 95% confidence intervals. The vertical bars represent point estimates from a binning estimator (Hainmueller et al. 2019), dividing the data into terciles. There are only two bins in the left panel because 3 is the first and second tercile in the data. We reverse the Anti-Russia scale from the PAP to better align with hypothesis 2b.

Figure 4. Effects of Tips & Tricks intervention on belief in 1) false/misleading articles, 2) true articles, and 3) discernment between false/misleading and true articles. Note: Lines represent 95% confidence intervals. Per the PAP, these estimates are all from models adjusted for covariates. See Appendix Table H2 for unadjusted models.

For all three sets of hypotheses, we refer the reader to Online Appendix O for exploratory results not presented here. As seen in Online Appendix P, in all cases but the categorical-variable version of the digital media intervention (Robustness check 3),Footnote 24 our robustness checks validate the main findings for our hypotheses.

Discussion & conclusion

Overall, and among those who prefer Ukrainian, we find that evaluating content in one’s non-preferred language reduces belief in false and true stories in equal measure and, therefore, does not help with discernment. Moreover, the effect on discernment may actually be negative for those who prefer Russian. A lack of improvement in discernment is conceptualized as normatively neutral (Guay et al. 2023); for those who prefer Russian, then, our results suggest a normatively negative effect on discernment.

Indeed, our estimates show that the dual reduction in belief in both true and false news stories when evaluating content in one’s non-preferred language occurs only for those who prefer Ukrainian, as those who prefer Russian either increase their belief or do not shift their evaluations, on average. One potential explanation for this heterogeneous effect is, as we discussed in the theory section, that respondents use language to infer truthfulness. Individuals may have developed a heuristic associating Ukrainian-language content with greater factual accuracy in reporting than Russian-language content.

Recent scholarship has also documented a major shift in language preference from Russian to Ukrainian in post-invasion Ukraine (Kulyk 2024). Our findings therefore have important implications for the Ukrainian population. First, because many Ukrainians who used to prefer Russian now prefer Ukrainian, Ukrainians are, on average, likely to become more distrustful of Russian-language media sources overall, making it harder for Russia and its allies to manipulate the Ukrainian population. Second, for those who continue to prefer Russian, forcing Ukrainian-language content on them will likely not yield benefits in terms of resilience to misinformation, because Russian-preferring Ukrainians already appear equally skeptical of Russian-language content. Future research could deepen these findings by probing in more detail whether effects vary by topic or type of article.

Our findings also extend outside of Ukraine. One potentially important population in this regard is Spanish-language users in the United States, where recent research has shown that Latinos who rely on Spanish-language social media are more likely to believe false political narratives than those who rely on English-language social media (Abrajano et al. 2024). Our findings could also extend to other post-Soviet countries where there is a fear of Russia and the majority language is not Russian, but where minority groups of ethnic Russians prefer to speak Russian. Although our results may not extend to every context in which most individuals are bilingual, we believe they can be informative in many.

Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/XPS.2025.10011

Data availability

The data, code, and any additional materials (Erlich, Aslett, et al. 2025) required to replicate all analyses in this article are available at the Journal of Experimental Political Science Dataverse within the Harvard Dataverse Network, at https://doi.org/10.7910/DVN/YEAJ85.

Acknowledgements

We thank Rafael Campos-Gottardo for his research assistance, and Sofiia Boklan, Dmytro Savchuk, and Valentyn Shurov for their help administering the project during challenging times.

Competing interests

The authors declare that there are no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

We gratefully acknowledge that the Center for Social Media and Politics at New York University is supported by funding from the John S. and James L. Knight Foundation, the Charles Koch Foundation, Craig Newmark Philanthropies, the William and Flora Hewlett Foundation, and the Siegel Family Endowment. Aaron Erlich acknowledges support from the Social Sciences and Humanities Research Council (SSHRC) grant 435-2018-1354.

Ethics statement

The research adheres to the American Political Science Association’s (APSA) Principles and Guidance for Human Subjects Research. Research approval was obtained from the New York University Institutional Review Board (IRB) (FY2020-4541) and the McGill University Research Ethics Board (REB) (#23-03-078-04). Informed consent was obtained from all participants included in the study, and all subjects were debriefed. Participants were compensated for their participation by the survey panel provider (i.e., Info Sapiens), not directly by the researchers.

Footnotes

This article has earned badges for transparent research practices: Open Data, Open Materials, and Preregistered. For details see the Data Availability Statement.

1 There is a small, but growing, list of exceptions (e.g., Badrinathan 2021; Mujani and Kuipers 2020; Rosenzweig et al. 2021).

2 We submitted our PAP for peer review to JEPS on January 17, 2022, just prior to Russia’s full-scale invasion of Ukraine. However, we suspended review of the study in the aftermath of the full-scale invasion. When social science research resumed after the invasion, we resubmitted a modified version of the PAP to JEPS. We then made three additional sets of updates to the PAP as the project unfolded. First, after each of two rounds of revise & resubmit, we updated the PAP to address the requests from peer reviewers. Second, we updated the PAP after conditional acceptance with editorial approval before our fieldwork, as a result of changed conditions related to the ongoing war. Third, after the first week of the study, we specified that we would sample more low-quality news outlets if we were not getting a sufficient number of false stories after the first two weeks of data collection (see Section K, page 32 of the PAP Appendix from 05–21–2024). All of the time-stamped versions of our PAP are available under files at https://osf.io/97mnj/?view_only=3e89348737ef4911904a4b7b92593902/. See Appendix S for information about the correspondence between the manuscript and the PAP.

3 Online Appendix F documents how we updated this measure due to the full-scale invasion.

4 See Online Appendix D for more information.

5 See the L2D-d program (https://bit.ly/3USM4du) and the Filter literacy project (https://bit.ly/3AUzcwx).

6 They are: (1) examine the effects across respondents with different ideological perspectives, (2) test each hypothesis using evaluations of true articles, (3) test whether the effects differ between those preferring the majority language and those preferring the minority language, (4) examine whether the effects we find hold across all articles, (5) examine whether the effects vary over time, (6) examine the effects across four major regions of Ukraine (Appendix I provides our regional classification), (7) determine the effect of language on discernment using evaluations of true and false/misleading articles, (8) examine whether the effects vary with emotion, and (9) test hypotheses 3a and 3b using a complier average causal effect (CACE) model.

7 Appendix H further details our quota goals and our deviations.

8 Per the PAP, after two weeks, because we were not getting many false stories, we removed a high-quality source that was not anti-Russian and replaced it with a low-quality source that was not anti-Russian.

9 In view of the ongoing war in Ukraine, we exclude articles that were judged to have the potential to be harmful to readers; see Appendix B for more details.

10 Fact-checkers evaluate articles first so that respondents can be informed of article veracity after the survey. Articles found in only one language undergo two-stage translation: one translator converts the text, then another reviews it.

11 Online Appendix G contains the text of the attention check questions.

12 Online Appendix J contains an example of how the article is viewed by respondents; Online Appendix K shows the full list of included and excluded articles.

13 See the debriefing protocol in Online Appendix C. While analyzing our findings, we became aware of a technical error that occurred during the debriefing for some participants on some of the articles in four of the ten weeks. Once we were aware of this error, we followed up with these respondents to provide the correct debriefing information; details about this protocol deviation can also be found in Appendix C.

14 Online Appendix E contains this commonly used wording (Guess and Munger 2020; Pennycook et al. 2018; Pennycook and Rand 2019) as well as the wording of robustness check 3, which uses a categorical measure.

15 We highlight that the importance of this addition goes beyond the scientific benefit of testing the tips in a new context. We are cognizant that the benefits of a study should outweigh the costs for participants, and this is one way we could increase the benefits of the study for some participants. This is particularly important in conflict areas, where researchers should take special care that a study’s impact on subjects is not net negative (Cronin-Furman and Lake 2018; Mazurana et al. 2013).

16 That is, if the professional fact-checkers rated the article as false/misleading, then “discernment” is higher if the treatment effect makes the respondent more likely to believe the article is false/misleading, and vice versa for articles rated as true.

17 This method deviates from the method registered in the PAP (see Online Appendix E); however, it reflects current best practices in the literature on misinformation (see Guay et al. 2022).

18 See Online Appendix F for more information on covariates and question wording. For all observations with covariate missing values, we use mean/mode imputation to replace the missing data with the mean/mode value of the observed data for that variable.

19 Online Appendix H contains information on our quotas; Online Appendix L shows descriptive statistics; Online Appendix M shows balance tables; Online Appendix S summarizes all study deviations from the PAP.

20 Our post-hoc reading of these “Couldn’t determine” stories is that they are generally false/misleading. Online Appendix Q contains analyses including these as false/misleading.

21 In the PAP, we stated that we would conduct the heterogeneous analysis only “if” we found overall results for H1. This was a mistake in the prose (it should have been “regardless if,”) and is not consistent with the rest of the pre-registration. Nevertheless, we note that this is a deviation in Appendix S.

22 The statistical significance of our Russian-preferring results varies by model specification – see Appendix Q.

23 CACE analysis assumes excludability: that being assigned to receive the media literacy tips does not directly influence respondents’ evaluation of stories, and the only way the random assignment affects outcomes is if people actually understand and absorb the content (measured by the comprehension questions).

24 As with many coarsened binary variables, the coarsening substantially reduces the information we have, and we do not find an effect.

References

Abrajano, Marisa, Garcia, Marianna, Pope, Aaron, Vidigal, Robert, Tucker, Joshua A., and Nagler, Jonathan. 2024. “How Reliance on Spanish-Language Social Media Predicts Beliefs in False Political Narratives amongst Latinos.” PNAS Nexus 3: 442. doi:10.1093/pnasnexus/pgae442
Allcott, Hunt, and Gentzkow, Matthew. 2017. “Social Media and Fake News in the 2016 Election.” Journal of Economic Perspectives 31: 211–36. doi:10.1257/jep.31.2.211
Ansaldo, Ana Inés, Marcotte, Karine, Scherer, Lilian, and Raboyeau, Gaelle. 2008. “Language Therapy and Bilingual Aphasia: Clinical Implications of Psycholinguistic and Neuroimaging Research.” Journal of Neurolinguistics 21: 539–57. doi:10.1016/j.jneuroling.2008.02.001
Badrinathan, Sumitra. 2021. “Educative Interventions to Combat Misinformation: Evidence from a Field Experiment in India.” American Political Science Review 115: 1325–41. doi:10.1017/S0003055421000459
Boroditsky, Lera. 2006. “Linguistic Relativity.” In Encyclopedia of Cognitive Science. London, UK: Nature Publishing Group.
Bronstein, Michael V., Pennycook, Gordon, Bear, Adam, Rand, David G., and Cannon, Tyrone D. 2019. “Belief in Fake News is Associated with Delusionality, Dogmatism, Religious Fundamentalism, and Reduced Analytic Thinking.” Journal of Applied Research in Memory and Cognition 8: 108–17. doi:10.1037/h0101832
Clayton, Katherine, Blair, Spencer, Busam, Jonathan A., Forstner, Samuel, Glance, John, Green, Guy, Kawata, Anna, Kovvuri, Akhila, Martin, Jonathan, Morgan, Evan, Sandhu, Morgan, Sang, Rachel, Scholz-Bright, Rachel, Welch, Austin T., Wolff, Andrew G., Zhou, Amanda, and Nyhan, Brendan. 2020. “Real Solutions for Fake News? Measuring the Effectiveness of General Warnings and Fact-Check Tags in Reducing Belief in False Stories on Social Media.” Political Behavior 42: 1073–95. doi:10.1007/s11109-019-09533-0
Coppock, Alexander. 2023. Persuasion in Parallel: How Information Changes Minds about Politics. Chicago: University of Chicago Press.
Corey, Joanna D., Hayakawa, Sayuri, Foucart, Alice, Aparici, Melina, Botella, Juan, Costa, Albert, and Keysar, Boaz. 2017. “Our Moral Choices are Foreign to Us.” Journal of Experimental Psychology: Learning, Memory, and Cognition 43: 1109.
Costa, Albert, Vives, Marc-Lluís, and Corey, Joanna D. 2017. “On Language Processing Shaping Decision Making.” Current Directions in Psychological Science 26: 146–51. doi:10.1177/0963721416680263
Cronin-Furman, Kate, and Lake, Milli. 2018. “Ethics Abroad: Fieldwork in Fragile and Violent Contexts.” PS: Political Science & Politics 51: 607–14.
Dragojlovic, Nick. 2015. “Listening to Outsiders: The Impact of Messenger Nationality on Transnational Persuasion in the United States.” International Studies Quarterly 59: 73–85. doi:10.1111/isqu.12179
Erlich, Aaron, Aslett, Kevin, Graham, Sarah, and Tucker, Joshua. 2025. “Replication Data for: How Language Shapes Belief in Misinformation: A Study Among Multilinguals in Ukraine.” Harvard Dataverse. doi:10.7910/DVN/YEAJ85
Erlich, Aaron, and Garner, Calvin. 2021. “Is pro-Kremlin Disinformation Effective? Evidence from Ukraine.” The International Journal of Press/Politics 26: 181–201.
Erlich, Aaron, Garner, Calvin, Pennycook, Gordon, and Rand, David G. 2023. “Does Analytic Thinking Insulate Against Pro-Kremlin Disinformation? Evidence From Ukraine.” Political Psychology 44: 79–94. doi:10.1111/pops.12819
Flores, Alejandro, and Coppock, Alexander. 2018. “Do Bilinguals Respond more Favorably to Candidate Advertisements in English or in Spanish?” Political Communication 35: 612–33. doi:10.1080/10584609.2018.1426663
Grosjean, François. 2010. Bilingual. Cambridge, MA: Harvard University Press. doi:10.4159/9780674056459
Guay, Brian, Berinsky, Adam J., Pennycook, Gordon, and Rand, David G. 2022. “Examining Partisan Asymmetries in Fake News Sharing and the Efficacy of Accuracy Prompt Interventions.” PsyArXiv Preprints. doi:10.31234/osf.io/y762k
Guay, Brian, Berinsky, Adam J., Pennycook, Gordon, and Rand, David. 2023. “How to Think about Whether Misinformation Interventions Work.” Nature Human Behaviour 7: 1231–33. doi:10.1038/s41562-023-01667-w
Guess, Andrew, Lerner, Michael, Lyons, Benjamin, Montgomery, Jacob M., Nyhan, Brendan, Reifler, Jason, and Sircar, Neelanjan. 2020. “A Digital Media Literacy Intervention Increases Discernment between Mainstream and False News in the United States and India.” Proceedings of the National Academy of Sciences 117: 15536–45. doi:10.1073/pnas.1920498117
Guess, Andrew, and Munger, Kevin. 2020. “Digital Literacy and Online Political Behavior.” OSF Preprints. Retrieved April 13, 2020.
Hainmueller, Jens, Mummolo, Jonathan, and Xu, Yiqing. 2019. “How Much Should We Trust Estimates from Multiplicative Interaction Models? Simple Tools to Improve Empirical Practice.” Political Analysis 27: 163–92. doi:10.1017/pan.2018.46
Ismail, Jamal Abdi, and Deane, James. 2008. “The 2007 General Election in Kenya and Its Aftermath: The Role of Local Language Media.” The International Journal of Press/Politics 13: 319–27. doi:10.1177/1940161208319510
Keshishian, Flora. 2000. “Acculturation, Communication, and the US Mass Media: The Experience of an Iranian Immigrant.” Howard Journal of Communications 11: 93–106. doi:10.1080/106461700246643
Keysar, Boaz, Hayakawa, Sayuri L., and An, Sun Gyu. 2012. “The Foreign-Language Effect: Thinking in a Foreign Tongue Reduces Decision Biases.” Psychological Science 23: 661–68. doi:10.1177/0956797611432178
Koltsova, Olessia, and Pashakhin, Sergei. 2020. “Agenda Divergence in a Developing Conflict: Quantitative Evidence from Ukrainian and Russian TV Newsfeeds.” Media, War & Conflict 13: 237–57. doi:10.1177/1750635219829876
Kuklinski, James H., Quirk, Paul J., Jerit, Jennifer, Schwieder, David, and Rich, Robert F. 2000. “Misinformation and the Currency of Democratic Citizenship.” The Journal of Politics 62: 790–816. doi:10.1111/0022-3816.00033
Kulyk, Volodymyr. 2024. “Language Shift in Time of War: The Abandonment of Russian in Ukraine.” Post-Soviet Affairs 40: 159–74. doi:10.1080/1060586X.2024.2318141
Lau, Richard R., Andersen, David J., Ditonto, Tessa M., Kleinberg, Mona S., and Redlawsk, David P. 2017. “Effect of Media Environment Diversity and Advertising Tone on Information Search, Selective Exposure, and Affective Polarization.” Political Behavior 39: 231–55. doi:10.1007/s11109-016-9354-8
Martel, Cameron, Pennycook, Gordon, and Rand, David G. 2020. “Reliance on Emotion Promotes Belief in Fake News.” Cognitive Research: Principles and Implications 5: 1–20.
Mastro, Dana. 2009. “Racial/Ethnic Stereotyping and the Media.” In Media Processes and Effects, ed. Bryant, J. and Oliver, M. B. New York: Routledge.
Mazurana, Dyan, Jacobsen, Karen, and Gale, Lacey Andrews. 2013. Research Methods in Conflict Settings: A View from Below. Cambridge: Cambridge University Press. doi:10.1017/CBO9781139811910
Muda, Rafał, Pennycook, Gordon, Hamerski, Damian, and Białek, Michał. 2023. “People Are Worse at Detecting Fake News in Their Foreign Language.” Journal of Experimental Psychology: Applied 29: 712–24.
Mujani, Saiful, and Kuipers, Nicholas. 2020. “Who Believed Misinformation during the 2019 Indonesian Election?” Asian Survey 60: 1029–43. doi:10.1525/as.2020.60.6.1029
Onguny, Philip. 2019. “Electoral Violence in Kenya 2007–2008: The Role of Vernacular Radio.” Journal of African Elections 18: 86–107. doi:10.20940/JAE/2019/v18i1a5
Pasitselska, Olga. 2017. “Ukrainian Crisis through the Lens of Russian Media: Construction of Ideological Discourse.” Discourse & Communication 11: 591–609. doi:10.1177/1750481317714127
Pennycook, Gordon, Cannon, Tyrone D., and Rand, David G. 2018. “Prior Exposure Increases Perceived Accuracy of Fake News.” Journal of Experimental Psychology: General 147: 1865. doi:10.1037/xge0000465
Pennycook, Gordon, and Rand, David G. 2019. “Lazy, Not Biased: Susceptibility to Partisan Fake News is Better Explained by Lack of Reasoning than by Motivated Reasoning.” Cognition 188: 39–50. doi:10.1016/j.cognition.2018.06.011
Pennycook, Gordon, and Rand, David G. 2020. “Who Falls for Fake News? The Roles of Bullshit Receptivity, Overclaiming, Familiarity, and Analytic Thinking.” Journal of Personality 88: 185–200. doi:10.1111/jopy.12476
Pérez, Efrén O., and Tavits, Margit. 2019. “Language Heightens the Political Salience of Ethnic Divisions.” Journal of Experimental Political Science 6: 131–40. doi:10.1017/XPS.2018.27
PwC-UK. 2016. The Rise of Cross-Border News. London: PricewaterhouseCoopers LLP.
Rosenzweig, Leah R., Bago, Bence, Berinsky, Adam J., and Rand, David G. 2021. “Happiness and Surprise are Associated with Worse Truth Discernment of COVID-19 Headlines among Social Media Users in Nigeria.” Harvard Kennedy School Misinformation Review 2: 1–37.
Ross, Michael, Xun, W. Q. Elaine, and Wilson, Anne E. 2002. “Language and the Bicultural Self.” Personality and Social Psychology Bulletin 28: 1040–50. doi:10.1177/01461672022811003
Somerville, Keith. 2009. “British Media Coverage of the Post-Election Violence in Kenya, 2007–08.” Journal of Eastern African Studies 3: 526–42. doi:10.1080/17531050903273776
Starbird, Kate, Dailey, Dharma, Mohamed, Owla, Lee, Gina, and Spiro, Emma S. 2018. “Engage Early, Correct More: How Journalists Participate in False Rumors Online During Crisis Events.” In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1–12.
Stewart, Andrew, Plotkin, Joshua, and McCarty, Nolan. 2021. “Inequality, Identity, and Partisanship: How Redistribution can Stem the Tide of Mass Polarization.” Proceedings of the National Academy of Sciences 118: e2102140118. doi:10.1073/pnas.2102140118
StratCom. 2015. Analysis of Russia’s Information Campaign against Ukraine. Riga, Latvia: NATO StratCom Centre of Excellence.
Suhay, Elizabeth, Bello-Pardo, Emily, and Maurer, Brianna. 2018. “The Polarizing Effects of Online Partisan Criticism: Evidence from Two Experiments.” The International Journal of Press/Politics 23: 95–115. doi:10.1177/1940161217740697
Sundar, S. Shyam, and Nass, Clifford. 2001. “Conceptualizing Sources in Online News.” Journal of Communication 51: 52–72. doi:10.1111/j.1460-2466.2001.tb02872.x
Szostek, Joanna. 2017. “The Power and Limits of Russia’s Strategic Narrative in Ukraine: The Role of Linkage.” Perspectives on Politics 15: 379–95. doi:10.1017/S153759271700007X
Szostek, Joanna. 2018. “Nothing is True? The Credibility of News and Conflicting Narratives During ‘Information War’ in Ukraine.” The International Journal of Press/Politics 23: 116–35. doi:10.1177/1940161217743258
Trafimow, David, Silverman, Ellen S., Fan, Ruth Mei-Tai, and Law, Josephine Shui Fun. 1997. “The Effects of Language and Priming on the Relative Accessibility of the Private Self and the Collective Self.” Journal of Cross-Cultural Psychology 28: 107–23. doi:10.1177/0022022197281007
Tsfati, Yariv, and Peri, Yoram. 2006. “Mainstream Media Skepticism and Exposure to Sectorial and Extranational News Media: The Case of Israel.” Mass Communication & Society 9: 165–87. doi:10.1207/s15327825mcs0902_3
Tukachinsky, Riva, Mastro, Dana, and Yarchi, Moran. 2015. “Documenting Portrayals of Race/Ethnicity on Primetime Television Over a 20-Year Span and Their Association with National-Level Racial/Ethnic Attitudes.” Journal of Social Issues 71: 17–38. doi:10.1111/josi.12094
Vihalemm, Triin, Juzefovičs, Jānis, and Leppik, Marianne. 2019. “Identity and Media-Use Strategies of the Estonian and Latvian Russian-Speaking Populations Amid Political Crisis.” Europe-Asia Studies 71: 48–70. doi:10.1080/09668136.2018.1533916
Vosoughi, Soroush, Roy, Deb, and Aral, Sinan. 2018. “The Spread of True and False News Online.” Science 359: 1146–51. doi:10.1126/science.aap9559