Introduction
Our cumulative knowledge about belief in misinformation predominantly comes from surveying English-speaking Americans about misinformation written in English from American media sources (Allcott and Gentzkow, 2017; Clayton et al., 2020; Pennycook and Rand, 2020; Pennycook et al., 2018).Footnote 1 However, the global media environment is highly multilingual. Half of the global population uses two or more languages or dialects in daily life (Ansaldo et al., 2008; Grosjean, 2010) and therefore likely consumes media, including misinformation, in multiple languages from both within and outside their borders. As cross-border media consumption increases, and multilingual media with it (PwC-UK, 2016), the distribution of false or misleading news in different languages carries substantial political consequences. Reporting news in different languages can differentially mobilize populations (Onguny, 2019), sometimes based on false or misleading evidence that escalates political violence (Ismail and Deane, 2008). Differential belief in misinformation strengthens ethnopolitical divides (Somerville, 2009) and has been linked to increased levels of affective polarization (Lau et al., 2017; Stewart et al., 2021; Suhay et al., 2018), which can weaken the foundations of liberal democracy (Kuklinski et al., 2000).
Given both these potentially troubling consequences for democracy and the rise of multilingual citizens who consume news in different languages in a single media market, it is imperative to develop a more comprehensive understanding of how news consumers assess the veracity of news in different languages. In this registered report, we conduct a study that addresses the broad research question: Are individuals more or less susceptible to misinformation in their non-preferred language?
Previous research on language and belief in misinformation has focused on identifying a language proficiency effect, but we propose that language may affect belief in misinformation even if individuals are similarly proficient in both languages. For example, language may signal credibility or elicit an emotional response, which may, in turn, affect belief in misinformation written in that language. A linguistic minority may be more skeptical of news written in a different language used by the majority because they have become distrustful of media that communicate using that language, or because of tensions with the majority linguistic group. Cues unique to a language and context, such as these, may explain why past studies measuring the effect of language proficiency on belief in misinformation have reported inconsistent results across different languages. Rather than a proficiency effect, we test whether there is a general language cue that affects even those who are equally proficient in their less-preferred language. Since previous research has found that emotion affects belief in false news, but not true news (Martel et al., 2020), we focus on belief in misinformation; however, we also perform exploratory analysis measuring the effect of language on belief in true news.
In this registered report, based on a peer-reviewed pre-analysis plan (PAP),Footnote 2 we test whether language cues affect belief in misinformation in a country where most individuals are similarly proficient in two languages. To determine whether or not language indeed affects belief in misinformation regardless of proficiency level, we conduct a survey experiment in Ukraine in which we randomly assign the language (either Ukrainian or Russian) of news articles to bilingual respondents, regardless of which language they prefer, in the days immediately after the publication of an article. Our primary research question is: Are multilingual individuals more skeptical of misinformation produced in their less preferred language? We secondarily test a tips and tricks intervention used in other contexts to see if previously reported positive effects hold in the Ukrainian context.
Theory and hypotheses
The preponderance of work on language and misinformation has focused on identifying a language proficiency effect by investigating two modes of cognition: an effortless mode based on heuristics and a more reflective mode based on deliberation (Costa et al., 2017; Corey et al., 2017; Keysar et al., 2012; Muda et al., 2023). These studies’ inconsistent results suggest that reading news in one’s less proficient language may not have the same effect across languages and contexts. Language influences information processing through multiple mechanisms: as a source cue affecting perceived credibility (Dragojlovic, 2015; Sundar and Nass, 2001), through cultural and cognitive priming effects on bilinguals (Boroditsky, 2006; Ross et al., 2002; Trafimow et al., 1997), and by activating political identities (Flores and Coppock, 2018; Pérez and Tavits, 2019). These effects are context-dependent – in Ukraine, we expect both majority and minority language groups may be less likely to believe misinformation in their less-preferred language, though for different reasons.
Within a country, minority groups with distinct languages can be skeptical of news that is written in the language used by the majority because mainstream news in the majority language often portrays minority groups in a negative light (Keshishian, 2000; Mastro, 2009; Tukachinsky et al., 2015). This skepticism can push those in the linguistic minority to consume sectoral or extranational media that often use different languages (Tsfati and Peri, 2006). Minority groups’ divergence in trust can gain prominence during periods of crisis (Vihalemm et al., 2019). Indeed, it is likely that during a crisis, mainstream news in the majority language can promote separate narratives that alienate minorities who are already skeptical of news in the majority language. Therefore, in such situations, it is possible that those who prefer a minority language are less likely to believe misinformation if it is written in the majority language (i.e., their less-preferred language). In Ukraine, we can test if this is indeed the case by surveying those who prefer Russian and measuring the effect that reading misinformation in their less-preferred language, Ukrainian, has on belief in that misinformation during a crisis (Russia’s 2022 full-scale invasion of Ukraine). Although the status of the Russian language is changing quickly in Ukraine and is a debated question, in our pilot, we found that almost the entire population of Ukraine reports high proficiency in both languages. Given their minority status in Ukraine, we expect those who are Russian-preferring in Ukraine to be more skeptical of misinformation written in Ukrainian than in Russian.
International conditions can also create language cues for majority language users, particularly when a foreign power uses a minority language for disinformation campaigns (StratCom, 2015). In Ukraine, Ukrainian-preferring news consumers may associate Russian-language news with foreign actors, potentially reducing belief in that information – a phenomenon partially supported by observational research showing Ukrainian language users are less likely to believe pro-Kremlin disinformation (Erlich and Garner, 2021).
First, given the literature, we test hypothesis H1:
H1 Individuals are less likely to believe a false/misleading article written in their less preferred language than in their more preferred language.
However, we have posited different mechanisms for belief in misinformation depending on whether an individual prefers to speak a minority or majority language. Therefore, we investigate whether there is support for our hypotheses among minority- and majority-language users in the country. To do so, we test H1 separately among only Russian-preferring respondents and among only Ukrainian-preferring respondents; these are subgroup tests of H1.
Second, while we cannot randomly assign mistrust of the central government, we can test some correlational observable implications of our causal mechanism by hypothesizing that among those who prefer Russian, as their distrust in the central government increases, they will be less likely to believe misinformation in Ukrainian relative to Russian.
H2a Among those who prefer the Russian language, the negative marginal effect of reading news in Ukrainian (relative to Russian) on belief in misinformation will be greater as distrust of the central government increases.
Third, we posited that among those who prefer Ukrainian, as animus towards Russia increases,Footnote 3 they will believe misinformation in Russian less, relative to misinformation in Ukrainian. Again, we cannot randomly assign animus (nor should we), but we can examine correlational support for our mechanism. Hence, we test:
H2b Among those who prefer the Ukrainian language, the negative marginal effect of reading news in Russian (relative to Ukrainian) on belief in misinformation will be greater as animus towards Russia increases.
In addition to testing the effect of language on belief in misinformation, Ukraine offers an opportunity to test the effectiveness of popular media literacy interventions on media audiences subject to massive information literacy programs in a propaganda-saturated information environment. To this end, we set out to measure the effectiveness of one of the most popular media literacy interventions, Facebook’s “Tips to Spot False News,” in Ukraine.Footnote 4 Since Russia first invaded Ukraine in 2014, Ukraine’s information environment has become saturated with disinformation (Pasitselska, 2017; Szostek, 2017) and divergent interpretations of events (Koltsova and Pashakhin, 2020; Szostek, 2018). In response, information literacy courses designed to inoculate individuals against misinformation have become popular and have been integrated into the public school curriculum in Ukraine.Footnote 5
Previously, Facebook’s “Tips to Spot False News” was tested in countries where propaganda and information literacy programs were not prevalent (the United States and India in 2019), which leaves us uncertain as to its effectiveness in areas where literacy programs and propaganda have become quite prevalent, such as Ukraine. Therefore, we test the following hypotheses:
H3a When individuals read the “tips to spot false news,” they will be less likely to believe a false/misleading article than those who do not read these tips.
H3b When individuals read the “tips to spot false news,” they will be more likely to correctly discern between a false/misleading article and a true article than those who do not read these tips.
We also pre-registered nine secondary exploratory analyses, for which, due to space constraints, we mainly present the results in Online Appendix O (although some are referenced in the main text).Footnote 6
Research design
We conducted a 10-week survey experiment in Ukraine, from May 15 to July 19, 2024, to test our hypotheses. Ukraine is an ideal case to test our hypotheses because most of its citizens are bilingual news consumers in Ukrainian and Russian. Each week, we used quota sampling to ensure geographic, ideological, and linguistic variation. While the full-scale invasion impacted all of Ukraine to various degrees, two regions were hardest hit: the East and the South, which also have historically had the most Ukrainians who use Russian as a primary language (although this has begun to change in recent years). We met our quotas in the East; however, we ultimately did not reach our full quota in the South. Further research could investigate how war-affectedness correlates with our treatment effects, although Appendix O.6 shows that, for this study, the coefficients from the South are similar to our overall estimates’ effect sizes.Footnote 7
Previous research measuring belief in misinformation has yet to integrate important findings about how individuals consume misinformation, limiting inference from these studies. Specifically, misinformation is consumed very quickly after publication (Starbird et al., 2018; Vosoughi et al., 2018), but most research asks respondents to evaluate months- or years-old fact-checked misinformation (Bronstein et al., 2019; Clayton et al., 2020; Pennycook and Rand, 2020). To address this limitation, we create a transparent, replicable, and pre-registered news article selection process that sources popular false/misleading and true articles within 24 h of their publication and subsequently distributes the full articles for evaluation to respondents in Ukraine. Our respondents evaluate these popular articles within 48–96 h of publication. This process ensures that we are measuring the effect of language on belief in popular misinformation in the time period that individuals are most likely to consume this misinformation. Our method also reduces researcher bias in article selection.
For each of the study’s 10 weeks, we collect and distribute a new group of five articles for each respondent to evaluate in randomized order. In the first two weeks, three out of five of these articles come from political websites known to produce low-credibility news and two out of five come from mainstream news sources, while in subsequent weeks four out of five articles come from low-credibility sources and one out of five comes from mainstream news sources. Online Appendix A details the sources and selection process.Footnote 8
Figure 1 shows our weekly process. Each Tuesday morning, we select Monday’s most popular non-excluded article from each of the five source lists (see Online Appendix B for exclusion protocols).Footnote 9 Three professional fact-checkers classify each article, and we use the modal response as the final classification.Footnote 10 On Wednesday, respondents receive articles to evaluate by Friday, with language (Ukrainian or Russian) randomly assigned.
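The modal-classification step can be sketched as follows. The tie-breaking rule shown here (recording a three-way rater split as "couldn't determine") is our assumption for illustration, not a documented part of the authors' protocol.

```python
from collections import Counter

def modal_label(labels):
    """Return the modal classification among fact-checker labels.

    With three raters, a label held by at least two is the mode.
    If all three disagree (assumption), record "couldn't determine".
    """
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= 2 else "couldn't determine"
```

For example, `modal_label(["true", "true", "false/misleading"])` resolves to `"true"`.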

Figure 1. Timeline of survey each week.
Using this process, we then distribute a survey that includes pre-treatment covariates and two stand-alone attention checks prior to the experimental manipulation.Footnote 11 In this manner, respondents evaluate articles within 48–96 h of publication.Footnote 12
Given that we focus on belief in misinformation, we only use evaluations of articles rated as false/misleading by professional fact-checkers in the main analysis but use true articles and discernment in exploratory analyses. We leverage the random assignment of the language in which the story is read by respondents to assign article evaluations to a control or treatment group. Every evaluation of an article read in a respondent’s non-preferred language is a “treated” observation, while each evaluation of an article in a respondent’s preferred language is “not treated.” Table 1 displays the evaluations we consider in the treatment and control group among Ukrainian-preferring and Russian-preferring Ukrainians. Per the PAP, we debrief subjects on the veracity of the stories they see.Footnote 13
Table 1. Assignment of treatment by preferred language of respondent and language article is written in

We measure belief in each article on a 4-point ordinal scale from “not at all accurate” (1) to “very accurate” (4), which serves as the outcome for each hypothesis test.Footnote 14
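The assignment rule described above and summarized in Table 1 reduces to a simple language-mismatch check. This sketch, with hypothetical language labels, illustrates how each article evaluation is coded:

```python
def is_treated(preferred_lang, article_lang):
    """An evaluation is 'treated' when the randomly assigned article
    language differs from the respondent's preferred language."""
    return article_lang != preferred_lang

# A Ukrainian-preferring respondent reading a Russian-language article
# is a treated observation; reading in Ukrainian is a control observation.
```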
In a second experiment, we randomly present half of the respondents with “tips” to help spot false news stories in the language in which they selected to take the study before assessing the veracity of the articles.Footnote 15 These tips replicate Guess, Lerner, et al. (2020), who found that, in the United States and India, these tips decreased belief in misinformation and increased discernment between misinformation and mainstream news. We test whether these results replicate in a setting in which the information ecosystem is saturated with propaganda and information literacy programs. Online Appendix D contains the full set of tips.
Analysis & results
In presenting our results, we split the articles into those labeled true and those labeled false/misleading based on professional fact-checker evaluations. We report results in two ways: by disaggregating the articles into those rated false/misleading by the professional fact-checkers and those rated true, and by measuring “discernment,” that is, whether the treatment makes respondents more likely to match the assessments of the professional fact-checkers.Footnote 16 To reiterate, we randomly assign everyone to read each article in Russian or Ukrainian, and then we ask them to evaluate the veracity of that article on a 4-point scale.
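One common way to operationalize discernment on a 4-point accuracy scale is to reverse-code ratings of false/misleading articles so that higher scores always indicate agreement with the fact-checkers. The sketch below illustrates that logic; it is an assumption for exposition, not necessarily the authors' exact coding.

```python
def discernment_score(rating, rated_true):
    """Score a 1-4 accuracy rating so that higher values indicate
    agreement with professional fact-checkers: high ratings are
    'correct' for true articles, and low ratings for false ones
    (reverse-coded as 5 - rating)."""
    return rating if rated_true else 5 - rating
```

Under this coding, rating a true article "very accurate" (4) and rating a false article "not at all accurate" (1) both yield the maximum discernment score of 4.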
For our primary analyses, we use linear regression models with batch fixed effects, where the unit of analysis is each article evaluation by an individual. Our experiments include a primary treatment examining articles presented in respondents’ less-preferred language (H1–H2), and a secondary treatment testing exposure to a digital literacy intervention (H3), with the latter assigned at the individual rather than article level. For H2a and H2b, we examine interactions with attitudinal variables like distrust in central government and anti-Russian attitudes. For the digital literacy intervention, we also examine discernment by modeling ratings of true and false articles through an interaction between article veracity and treatment.Footnote 17 We include pre-registered covariates such as demographic variables and attention checks, and use HC2 robust standard errors, clustering at the individual level for analyses involving the literacy intervention.Footnote 18 For hypotheses involving subgroup analyses, we estimate separate models for Ukrainian and Russian language preferences. The tabular results for H1 and H3 are available in Online Appendix N. All of our model specifications and details are available in Online Appendix R and the PAP. We document the estimates for all exploratory analyses (including those presented in the body of the manuscript) in Online Appendix O and robustness checks in Online Appendix P.
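Stripped of batch fixed effects, covariates, and HC2 robust errors, the core treatment-effect comparison underlying these models is a difference in mean accuracy ratings between treated and control evaluations. A minimal sketch, for intuition only:

```python
from statistics import mean, variance

def diff_in_means(treated, control):
    """Difference in mean belief ratings (treated minus control),
    with a conventional unpooled standard error. The paper's actual
    estimates come from linear regressions with batch fixed effects,
    pre-registered covariates, and HC2 robust standard errors, so
    this is an illustrative simplification only."""
    effect = mean(treated) - mean(control)
    se = (variance(treated) / len(treated)
          + variance(control) / len(control)) ** 0.5
    return effect, se
```

For example, `diff_in_means([1, 2, 3, 4], [2, 3, 4, 5])` returns a negative effect, indicating lower belief among treated evaluations.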
Subjects are drawn from the research firm Info Sapiens’ online panel and invited to participate in our survey administered via Qualtrics. In total, 1,897 participants took part in our study (1,116 Ukrainian-preferring and 781 Russian-preferring).Footnote 19
Respondents each rated five of the 50 articles fielded over the 10 weeks. The fact-checkers rated 37 (74%) as true, 10 (20%) as false/misleading, and 3 (6%) as “couldn’t determine.”Footnote 20
Of the 1,897 respondents, 1,325 rated at least one false/misleading story. Our data therefore include 5 × 1,897 = 9,485 article observations, of which 1,869 false/misleading observations from 1,325 respondents are used in the primary analyses.
We find mixed evidence with respect to H1, as shown in Figure 2. Exposing subjects to misinformation in their less-preferred language reduces belief in these stories by 0.04 (S.E. = 0.05), which is 0.05 standard deviation units. While this is not statistically significant in our main specification (it is in our alternative specification, see Appendix Q), the data tell a much more nuanced story. When we estimate the effects separately for those who prefer Russian and Ukrainian, we find heterogeneous effects of reading the news in a non-preferred language.Footnote 21 Ukrainian-preferring respondents’ behavior aligns with H1: reading stories in Russian reduces their belief in misinformation (–0.18, S.E. = 0.05), which is 0.20 standard deviation units and statistically significant across model specifications. Russian-preferring respondents, however, contradict H1 and increase their belief in misinformation if the story is presented in Ukrainian (0.15, S.E. = 0.07), which is 0.18 standard deviation units.Footnote 22 The difference between these two subgroups is large and statistically significant (–0.33, S.E. = 0.08), which is 0.25 standard deviation units. This analysis, pre-registered as Exploratory Analysis 3, provides clear evidence that languages do not work symmetrically and that context may play a large role in how respondents react to a non-preferred language.

Figure 2. Effects of reading in less-preferred language. Note: Point estimates are shown with 95% confidence intervals from models controlling for covariates (per PAP). Appendix Table H1 contains unadjusted models.
However, exposing respondents to stories in their less-preferred language also reduces their belief in true stories (Exploratory Analysis 2) by 0.09 on the four-point scale (S.E. = 0.02), which is a similar number of standard deviation units (0.10) as for false stories but statistically significant, given the much larger sample of true stories. Therefore, the overall coefficient on discernment (Exploratory Analysis 7) is substantively null and not statistically significant (–0.04, S.E. = 0.03), which is 0.05 standard deviation units (as it also is for those who prefer Ukrainian). The same heterogeneous pattern between Russian- and Ukrainian-preferring respondents also holds for true stories, but the true-story coefficient for Russian-preferring respondents is much smaller and not statistically significant, which means that discernment for Russian-preferring respondents is negative (–0.15, S.E. = 0.07) and statistically significant in this main specification (but not in others).
As shown in Figure 3, we find no substantive support for H2a and limited support for H2b. For H2a, among those who prefer Russian, the marginal effect of reading an article in Ukrainian is positive at low levels of distrust in the central government, the opposite of what the hypothesis predicted. Moreover, the trend is hard to estimate (and not statistically significant), given the small number of respondents with high levels of trust in the central government. For H2b, the marginal effect of reading misinformation in Russian is negative and statistically significant among Ukrainian-preferring respondents with a more anti-Russian ideology, but not among those who are more pro-Russia. However, we have very few pro-Russian respondents, so that trend is also hard to estimate and not statistically significant.
Figure 4 displays the results of our models testing Hypothesis 3. For all respondents, average belief in misinformation is reduced by 0.12 (S.E. = 0.04) among all whom we intended to treat with the “tips to spot false news” (Hypothesis 3a), which is 0.10 standard deviation units. Moreover, the effect is similar between Russian- and Ukrainian-preferring respondents. This similarity in response to the digital media intervention is consistent with work suggesting that persuasion often operates similarly across many different types of groups (Coppock, 2023) and with research showing that analytic thinking has the same correlation with belief in news for both Ukrainian and Russian respondents in Ukraine (Erlich et al., 2023). Unfortunately, the tips and tricks treatment also reduces belief in true articles, similar to false articles (Exploratory Analysis 2). This means that the tips and tricks did not affect discernment (0.00, S.E. = 0.03), counter to the prediction in Hypothesis 3b. We also display Exploratory Analysis 9, our estimation of Complier Average Causal Effects (CACE), where our indicator for receipt of the treatment is whether respondents correctly answer at least two of the three follow-up comprehension questions about the tips we show them.Footnote 23 These estimates highlight that the effects of the tips and tricks are more than double among compliers, though less precisely estimated.
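With one-sided noncompliance, the complier effect is the intent-to-treat effect scaled by the compliance rate (a Wald/IV estimator). The sketch below uses a hypothetical compliance rate for illustration; the study's actual compliance rate is not stated here.

```python
def cace(itt_effect, compliance_rate):
    """Complier average causal effect under one-sided noncompliance:
    the intent-to-treat effect divided by the share of compliers
    (here, the share answering at least two of three comprehension
    questions correctly)."""
    if not 0 < compliance_rate <= 1:
        raise ValueError("compliance rate must be in (0, 1]")
    return itt_effect / compliance_rate

# With an ITT of -0.12 and a hypothetical compliance rate of 0.5,
# the implied complier effect would be -0.24, i.e., double the ITT.
```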

Figure 3. H2: Conditional subgroup effects. (a) Marginal effects of evaluating false/misleading news articles in one’s less preferred language on Russian-preferring respondents across different levels of central government distrust. (b) Marginal effects of evaluating false/misleading news articles in one’s less preferred language on Ukrainian-preferring respondents across different levels of anti-Russian ideology. Note: The gray shaded area represents 95% confidence intervals. The vertical bars represent point estimates from a binning estimator (Hainmueller et al., 2019), dividing the data into terciles. There are only two bins in the left panel because 3 is the first and second tercile in the data. We reverse the Anti-Russia scale from the PAP to better align with hypothesis H2b.

Figure 4. Effects of Tips & Tricks intervention on belief in 1) false/misleading articles, 2) true articles, and 3) discernment between false/misleading and true articles. Note: Lines represent 95% confidence intervals. Per the PAP, these estimates are all from models adjusted for covariates. See Appendix Table H2 for unadjusted models.
For all three sets of hypotheses, we refer the reader to Online Appendix O for exploratory results not presented here. As seen in Online Appendix P, in all cases but the categorical variable for the digital media intervention (Robustness check 3),Footnote 24 our robustness checks validate the main findings from our hypotheses.
Discussion & conclusion
Overall, and among those who prefer Ukrainian, we find that evaluating content in one’s non-preferred language reduces belief in false and true stories in equal measure and, therefore, does not help with discernment. Moreover, the effect on discernment may actually be negative for those who prefer Russian. Following Guay et al. (2023), a lack of improvement in discernment can be considered neutral from a normative perspective, while for those who prefer Russian, our results suggest a normatively negative effect on discernment.
Indeed, our estimates show that the dual reduction in belief in both true and false news stories when evaluating content in one’s non-preferred language only occurs for those who prefer Ukrainian, as those who prefer Russian either increase their belief or do not shift their evaluations, on average. One potential explanation for this heterogeneous effect is, as we discussed in the theory section, that respondents could use language to infer truthfulness. Individuals may thus have developed a heuristic associating Ukrainian-language content with greater factual accuracy in reporting than Russian-language content.
Recent scholarship has also documented a major shift from preferring Russian to preferring Ukrainian in post-invasion Ukraine (Kulyk, 2024). Our findings therefore have important implications for the Ukrainian population. First, because many Ukrainians who used to prefer Russian now prefer Ukrainian, Ukrainians are, on average, likely to become more distrustful of Russian-language media sources overall, making it harder for Russia and its allies to manipulate the Ukrainian population. Second, for those who continue to prefer Russian, forcing Ukrainian-language content on them is unlikely to yield benefits in terms of resilience to misinformation, because Russian-preferring Ukrainians already appear equally skeptical of Russian-language content. Future research could deepen these findings by probing in more detail whether effects vary by topic or type of article.
Our findings also extend outside of Ukraine. One potentially important population in this regard is Spanish language users in the United States, where recent research has shown that Latinos who rely on Spanish language social media are more likely to believe false political narratives than those who rely on English language social media (Abrajano et al., 2024). Our findings could also extend to other post-Soviet countries where there is a fear of Russia, and the majority language is not Russian, but there are minority groups of ethnic Russians who prefer to speak Russian. Although our results may not extend to every context in which most individuals are bilingual, we believe they can be informative in many.
Supplementary material
The supplementary material for this article can be found at https://doi.org/10.1017/XPS.2025.10011
Data availability
The data, code, and any additional materials (Erlich, Aslett, et al., 2025) required to replicate all analyses in this article are available at the Journal of Experimental Political Science Dataverse within the Harvard Dataverse Network, at https://doi.org/10.7910/DVN/YEAJ85.
Acknowledgements
We thank Rafael Campos-Gottardo for his research assistance, and Sofiia Boklan, Dmytro Savchuk, and Valentyn Shurov for their help administering the project during challenging times.
Competing interests
The authors declare that there are no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
We gratefully acknowledge that the Center for Social Media and Politics at New York University is supported by funding from the John S. and James L. Knight Foundation, the Charles Koch Foundation, Craig Newmark Philanthropies, the William and Flora Hewlett Foundation, and the Siegel Family Endowment. Aaron Erlich acknowledges support from the Social Sciences and Humanities Research Council (SSHRC) grant 435-2018-1354.
Ethics statement
The research adheres to the American Political Science Association’s (APSA) Principles and Guidance for Human Subjects Research. Research approval was obtained from the New York University Institutional Review Board (IRB) (FY2020-4541) and the McGill University Research Ethics Board (REB) (#23-03-078-04). Informed consent was obtained from all participants included in the study, and all subjects were debriefed. Participants were compensated for their participation by the survey panel provider (i.e., Info Sapiens), not directly by the researchers.