
Misinformation Among Migrants: Evidence from Mexico and Colombia

Published online by Cambridge University Press:  01 September 2025

Antonella Bandiera
Affiliation:
Department of Political Science, ITAM: Instituto Tecnologico Autonomo de Mexico, Mexico City, Mexico
Daniel Rojas*
Affiliation:
Department of Political Science, The University of British Columbia, Vancouver, BC, Canada
*
Corresponding author: Daniel Rojas; Email: daniel.rojaslozano@ubc.ca

Abstract

This paper examines the effectiveness of media literacy interventions in countering misinformation among in-transit migrants in Mexico and Colombia. We conducted experiments to assess whether well-known strategies for fighting misinformation are effective for this understudied yet particularly vulnerable population. We evaluate the impact of digital media literacy tips on migrants’ ability to identify false information and their intentions to share migration-related content. We find that these interventions can effectively decrease migrants’ intentions to share misinformation. We also find suggestive evidence that asking participants to consider accuracy may inadvertently influence their sharing behavior by acting as a behavioral nudge, rather than simply eliciting their sharing intentions. Additionally, the interventions reduced trust in social media as an information source while maintaining trust in official channels. The findings suggest that incorporating digital literacy tips into official websites could be a cost-effective strategy to reduce misinformation circulation among migrant populations.

Information

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of American Political Science Association

Introduction

The spread of misinformation through digital platforms poses significant challenges for migrants, potentially influencing their decision-making processes and exposing them to various risks during their journeys. The United Nations Refugee Agency (UNHCR) recently reported that social media platforms such as Facebook and TikTok are becoming a widely used source of information for individuals deciding to cross one of the world’s most dangerous migration passages, the Darien Gap (UNHCR 2023).Footnote 1 At the same time, Human Rights Watch has raised concerns about smugglers exploiting these platforms to disseminate misleading information about journey risks and available services, further complicating migrants’ decision-making processes (Ragozzino and Pappier 2023). However, online misinformation about the migration itinerary is not limited to this journey. Anecdotal reports suggest that migrants traveling to the United States via Mexico frequently encounter online rumors and deception regarding border openings and asylum policies (Gómez and Schmidt 2023). Migrants from the Middle East and Africa heading to Europe face similar challenges (Mercy Corps 2018).

Despite the widespread recognition of migrants’ exposure to deceptive online information from unofficial sources (see, e.g., Dekker et al. 2018; Siegel, Wolff, and Weinstein 2024), there is still limited knowledge about how to combat online misinformation among migrants effectively; the growing research on strategies to counter online misinformation focuses on interventions among the general public (see Aslett et al. 2021; Pennycook et al. 2021; Porter and Wood 2022; Arechar et al. 2023; Pereira et al. 2023; Offer-Westort, Rosenzweig, and Athey 2024). But do these strategies work on migrants?

Migrants, particularly those in transit, must cross unfamiliar territories and navigate changing policies. This creates an urgent need for timely and accurate information about their journey and destination. The time-sensitive nature of this type of information, combined with migrants’ optimism about their chances of successfully arriving at their destination (Beber and Scacco 2022; Bah et al. 2023), makes them particularly vulnerable to misinformation and deceptive online messages.

Building on the growing empirical literature on online misinformation, we propose and evaluate the impact of cost-effective, scalable strategies aimed at reducing migrants’ exposure to and dissemination of misleading information. We contribute to this existing work by exploring whether established misinformation-combating strategies work on an understudied group that faces unique information-related challenges.

We conducted online experiments in Mexico (Study 1, n = 716) and Colombia (Study 2, n = 688) in 2024, sampling in-transit migrants using Facebook ads and randomly exposing these participants to media literacy tips. Mexico and Colombia are key locations for testing strategies to combat online misinformation among migrants. First, these countries are important transit points for international migrants heading to the United States. Second, many in-transit migrants in these countries own cellphones and access the internet daily (Rojas 2023). Finally, smugglers and human traffickers actively spread online misinformation in these areas (Ragozzino and Pappier 2023).

Study 1 evaluated how a media literacy intervention affected migrants’ ability to identify false versus true information and their intentions to share misinformation. Study 2 substantially modified the treatment from the first study – most notably by focusing on the tips proven most effective in the literature (Guess et al. 2024) – and concentrated exclusively on sharing intentions rather than measuring accuracy discernment.Footnote 2 Our findings show that media literacy interventions can decrease migrants’ intention to share migration-related misinformation, even though they did not significantly improve the ability to identify false information. Additional analyses indicate that while these interventions decreased trust in social media as a source of migration-related information, they did not affect trust in official sources.

The “connected migrant”

Migrants rely on the internet to learn about potential destinations (Holland and Peters 2020), communicate with smugglers (Gillespie, Osseiran, and Cheesman 2018), and share border-crossing information and strategies (Noori 2022). Yet, migrants acknowledge that they lack credible online information related to their journey (Borkert, Fisher, and Yafi 2018) and must implement ad-hoc strategies to check the accuracy of the information they consume (Alencar 2018).

While technology reduces information and communication costs, it also exposes migrants to misinformation (Gillespie, Osseiran, and Cheesman 2018). Migrants use digital technologies to provide and search for online information at every step of their journey (Mancini et al. 2019), but at the same time, human traffickers spread misinformation through online platforms to profit from irregular migration (Beber and Scacco 2022). This circumstance makes the internet and digital technologies a double-edged sword for migrants, particularly those in transit.

Existing strategies based on literacy tips and pre-bunking could be useful to increase migrants’ capacity to identify potential misinformation. Pre-bunking through inoculation – exposing people to misinformation, warnings, and identifying strategies – has been shown to reduce vulnerability to future false information and its impact (Lewandowsky and van der Linden 2021). Fact checks can also be effective at correcting misperceptions (Nyhan 2021; Bowles et al. 2023). However, strategies that rely on attaching warnings or fact-checking circulating misinformation may be limited. Much of the false information targeting migrants circulates at considerable speed, and verifying it is costly.

Another approach is to provide digital literacy tips. This strategy has shown promise in improving users’ ability to discern true from false information and reducing the spread of misinformation. This “vertical approach” prompts citizens to consider the quality of each piece of information, for example, by assessing headlines, sources, or URLs, and simple reminders to consider accuracy before sharing can significantly improve discernment and reduce the spread of false content (Guess et al. 2024).

The effectiveness of these interventions for migrants is an open question. Migrants face unique constraints – they often need to make quick decisions based on available information, may have limited time and cognitive resources due to the stresses of migration, and face higher stakes when acting on potentially false information. The lack of resources and reliance on social media platforms as a source of information may be problematic for this population. Given these constraints, we focus on testing the vertical approach, as these simple literacy tips have shown promising results while being both cost-effective and potentially scalable to reach larger migrant populations.

Study 1: Mexico

Research design

In the first study, we used Facebook ads to recruit international migrants located in Mexico who did not plan to stay there for the next 12 months. Mexico is one of the largest Facebook markets in Latin America (Facebook 2024) and the most important crossing country for in-transit migrants heading to the United States. Once eligibility was verified,Footnote 3 we collected demographic information and then randomly assigned participants to one of the two experiments included in the survey.Footnote 4 In the misinformation experiment, which is the main focus of this paper, we randomly assigned participants to one of three groups. In the control group ($n = 244$), participants read a neutral vignette about global warming and related environmental tips. In the Tips condition ($n = 239$), participants received a vignette explaining misinformation and its dangers in the migration context, along with three media literacy tips for identifying false information: check the source, check the quality, and review the content.Footnote 5 Finally, in the Tips + Example condition ($n = 233$), participants received the same content as in Tips, plus an example Facebook post containing misinformation. Besides the tips, the vignette highlighted specific features that indicated the post’s misleading nature. These example posts were randomly selected from a curated bank of Facebook posts that our research team had previously identified as containing misinformation.

To measure outcomes, we present individuals with five Facebook posts about migration, at least one of which contains misinformation. For each post, we measure participants’ ability to discern true from false information, whether they wish to verify misinformation and accurate information, and their willingness to share false and accurate information, keeping the order of the questions the same across posts.Footnote 6 We use these metrics to create our outcome variables: the proportion of accurate posts correctly identified as accurate (Perceived Accuracy (A)); the proportion of false posts identified as accurate (Perceived Accuracy (F)); the proportion of accurate and false news correctly identified as such (Classification Accuracy); the proportion of posts with misinformation that participants want to verify (Verification Tendency Rate); the proportion of posts with accurate information that participants want to verify (Accurate Verification Tendency Rate); the proportion of posts with accurate information that participants intend to share (Sharing (A)); the proportion of posts with false information that participants intend to share (Sharing (F)). The primary outcome variables, thus, range from 0 to 1.

In addition to the pre-registered outcomes, we report discernment outcomes, which capture the extent to which migrants believe or share accurate news relative to false news. For these outcomes, a positive regression coefficient indicates that the intervention decreases belief in or sharing of false news more than it decreases belief in or sharing of accurate news (see Guay et al. 2023).

We also measure participants’ trust in various information sources utilizing a 4-point Likert scale, ranging from 1 (“do not trust at all”) to 4 (“trust a lot”). For each source, we dichotomize responses by coding values 3 or 4 as 1 and all other responses as 0. We then calculate the average across the binarized responses to build the Trust Sources outcome variable.
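The construction of the Trust Sources index described above can be sketched as follows. This is an illustrative reconstruction, not the authors’ replication code; the respondent data and source names are hypothetical.

```python
# Sketch of the Trust Sources index: dichotomize 4-point Likert ratings
# (3 or 4 = "trust" -> 1; 1 or 2 -> 0) and average across sources.
# Data and source names below are illustrative, not the study's data.

def trust_index(responses):
    """responses: dict mapping information source -> Likert rating 1-4.
    Returns the mean of the binarized (trust / no trust) responses."""
    binary = [1 if rating >= 3 else 0 for rating in responses.values()]
    return sum(binary) / len(binary)

# Hypothetical respondent: trusts government and newspapers, distrusts the rest.
respondent = {"government": 4, "newspapers": 3, "facebook": 2,
              "tiktok": 1, "twitter": 2, "whatsapp": 2}
print(trust_index(respondent))  # 2 of 6 sources trusted -> 0.333...
```

Averaging the binarized responses (rather than the raw Likert values) keeps the index on the same 0–1 scale as the other outcome variables.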

We measure all outcomes pre- and post-treatment, using different posts each time to avoid repetition.Footnote 7 We report means and standard deviations for all outcome variables in Tables A3 and A2.

Characteristics of the sample

Participants come from Guatemala (19.2%), Haiti (12.4%), and Venezuela (12.3%), with smaller proportions from other Latin American countries. The sample was predominantly male (61.5%), single (36.2%), and with a median age of 34 years, which is consistent with the characteristics exhibited by Latin American migrants. A majority of participants reported having no steady source of income (69.5%). Geographically, participants were distributed across major Mexican transit cities, with significant concentrations in border cities like Tijuana (16.9%) and Ciudad Juárez (10.3%) and in Mexico City (33.6%), where many await humanitarian visas that allow them to journey to the Northern border. The United States was overwhelmingly the preferred destination country by migrants (67.1%).Footnote 8 Section A.7 demonstrates that the treatment assignment was successfully randomized.

Results

Figure 1 shows the effects of both experimental arms on the accuracy and sharing outcomes. While neither treatment shows significant effects across outcomes, the regression coefficients for sharing intentions among respondents exposed to the Tips + Example treatment are negative, which suggests a lower propensity to share information among treated respondents compared to those in the placebo group. However, the observed reduction in sharing intent is imprecisely estimated ($p_{\mathrm{Sharing(A)}}$ = 0.24, $p_{\mathrm{Sharing(F)}}$ = 0.15). The results reported in Figure 1 hold regardless of respondents’ level of motivated reasoning (Table A7), trust in people (Table A8), and digital literacy (Table A9). Notably, the treatments increased the perceived accuracy of accurate posts among women (Table A10).Footnote 9 Figure 2 shows no effect on the trustworthiness of any of the sources of information typically used by migrants. We report null effects for Verification Tendency Rate and Accurate Verification Tendency Rate in Figure A11. Results from regressions estimated controlling for outcome variables measured at baseline are virtually the same (Figure A12).

Figure 1. Effects of Misinformation Treatments in Mexico: Average Treatment Effect (ATE) of tips, or tips and examples, on classification accuracy (mean$_{placebo}$ = 0.583), perceived accuracy for accurate news (mean$_{placebo}$ = 0.371), perceived accuracy for false news (mean$_{placebo}$ = 0.276), accuracy discernment (mean$_{placebo}$ = 0.095), accurate news sharing intentions (mean$_{placebo}$ = 0.217), fake news sharing intentions (mean$_{placebo}$ = 0.189), and sharing discernment (mean$_{placebo}$ = 0.029). Results from ordinary least squares (OLS) models with robust standard errors and 95% confidence intervals. Regression results in Table C15.

Figure 2. Effects of Misinformation Treatments in Mexico: Average Treatment Effect (ATE) of tips, or tips and examples, on trust in information sources (composite index, mean$_{placebo}$ = 0.627) and its individual components (placebo mean for Gov. = 0.725, Newspapers = 0.648, Facebook = 0.623, TikTok = 0.570, Twitter/X = 0.574, WhatsApp = 0.623). Results from OLS models with robust standard errors and 95% confidence intervals. Regression results in Table C18.

One plausible explanation for the lack of treatment effects – despite the intervention’s demonstrated success in other contexts – is that asking first about accuracy influenced participants’ responses across treatment groups. Prior research shows that prompting individuals to consider accuracy can reduce sharing intentions and increase attentiveness to misinformation (Pennycook et al. 2021), but may also reduce discernment when both accuracy and sharing questions are asked (Epstein et al. 2023). In our control group, participants correctly classified as misinformation approximately 72% of false posts (i.e., classification accuracy for false posts), while only 19% indicated an intention to share it. The high rate of posts classified as false suggests that asking participants to evaluate accuracy may have induced a generalized skepticism: participants often misclassified true posts as false. Accuracy questions likely led participants to scrutinize all content more closely and default toward labeling information as false. As a result, sharing intentions were low across treatment groups. This measurement-induced skepticism may help explain the absence of discernible treatment effects.

While our study design prevents us from directly testing whether accuracy questions cause changes in sharing behavior – as this would require an experimental condition without accuracy questions – we show below that there is a substantial difference in baseline sharing rates between Study 1 (with accuracy questions) and Study 2 (without accuracy questions), which provides suggestive evidence for this effect. This measurement artifact may have potentially obscured treatment effects in Study 1.

Study 2: Colombia

Research design

Study 2 modified the research design in various ways. In particular, we redesigned and pre-registered our intervention and survey based on our initial null findings in Study 1 and Guess et al.’s (2024) evidence on which tips work to improve sharing discernment.Footnote 10 First, we enhanced the visual presentation of the literacy tips to increase engagement and comprehension (Figure B17). Moreover, to increase compliance with the treatment, before presenting the flyers, we told individuals that we would ask them a question about the information afterward. The question asked participants to recall the number of tips they had just read in the flyer.Footnote 11 Second, drawing from Guess et al. (2024), we streamlined our intervention to focus on three tips that had demonstrated strong effects: consider the source and URL domain, think about how accurate the content is, and maintain skepticism toward headlines.Footnote 12 Third, we eliminated the accuracy outcome measure and focused solely on sharing intent as an outcomeFootnote 13 to avoid potential contamination effects on sharing intentions, as research has shown that simply asking people to evaluate accuracy can serve as an intervention (Pennycook et al. 2021; Epstein et al. 2023). Indeed, instead of nudging all participants to consider accuracy, as in Study 1, we now offer this nudge only as a treatment, to those randomly assigned to the Tips condition. We also refrained from measuring outcomes at baseline, and to keep the survey duration to a minimum, we also eliminated the example. Finally, we conducted the study with a sample of in-transit migrants in Colombia.Footnote 14

Characteristics of the sample

Most participants in our sample are from Venezuela (94.8%). The sample is relatively young, with 36 years being the median age. Most individuals are females (60%) and single (44%), and more than 60% of those surveyed report having no steady source of income (Figure B15). We asked participants to report which top platforms and sources they used to search for migration-related information. Figure B16 shows the most common platform is Facebook (41.6%), followed by government websites (33%) and newspapers (19.6%). In terms of sources, people claim to rely on the US Government (46.2%), followed by family (21.2%) and NGOs (18.3%). Table B13 reports results from balance tests and indicates a successful treatment randomization.

Results

Figure 3 presents the main results. These results suggest that asking the control group to consider accuracy as an outcome might have obscured a potentially significant treatment effect in Study 1.Footnote 15 Indeed, when we focus on the effect of the Tips, we see that the proportion of posts with false information that treated participants want to share falls by 21 p.p. Interestingly, the share of accurate posts that participants want to share also falls, but by 11 p.p. The fact that these effects are statistically different from each other ($p$ = 0.02) suggests that the tips may be activating participants’ capacity to detect false information, as they are comparatively less willing to share it. Our results show that the tips increase sharing discernment and are consistent with those reported in the literature.Footnote 16 Yet, our finding that tips reduced sharing of accurate posts (albeit less than false posts) suggests that our intervention may operate through a combination of increased accuracy discernment and general skepticism.

Figure 3. Effects of Misinformation Treatments in Colombia: Average Treatment Effect (ATE) of tips on accurate news sharing intentions (mean$_{placebo}$ = 0.746), fake news sharing intentions (mean$_{placebo}$ = 0.627), and sharing discernment (difference between sharing rates of accurate and fake news, mean$_{placebo}$ = 0.119). Results from OLS models with robust standard errors and 95% confidence intervals. Regression results in Table C20.
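A back-of-the-envelope check makes the sharing-discernment result concrete. Using the rounded point estimates quoted above (not the exact regression coefficients), the treatment effect on discernment is the gap between the two sharing effects:

```python
# Rounded point estimates from Study 2 (Colombia), as quoted in the text.
ate_sharing_accurate = -0.11   # effect on sharing intentions for accurate posts
ate_sharing_false = -0.21      # effect on sharing intentions for false posts

# Sharing discernment = Sharing(A) - Sharing(F), so its treatment effect is
# the difference of the two effects above; a positive value means sharing of
# false posts fell more than sharing of accurate posts.
ate_discernment = ate_sharing_accurate - ate_sharing_false
print(round(ate_discernment, 2))  # 0.1

# Relative to the placebo-group discernment mean of 0.119 (Figure 3), this is
# an increase of roughly 84% (the paper's footnote reports 83%, computed from
# the unrounded estimates).
print(round(ate_discernment / 0.119, 2))  # 0.84
```

The small gap between 83% and 84% simply reflects rounding of the published point estimates.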

Additionally, there is some suggestive evidence that treated participants may become distrustful. Figure 4 shows that trust across the most popular social media platforms appears to decrease. However, the effect is only significant for Facebook.Footnote 17 Importantly, there is no effect on trust in reliable sources of information such as the government or newspaper sources, which suggests that the treatment is increasing discernment rather than triggering an overall skepticism among migrants. Table B14 shows that the differences between the treatment effects on trust in Facebook and in government and newspaper sources are significant.

Figure 4. Effects of Misinformation Treatments in Colombia: Average Treatment Effect (ATE) of tips on trust in information sources (composite index, mean$_{placebo}$ = 0.583) and its individual components (placebo mean for Gov. = 0.692, Newspapers = 0.589, Facebook = 0.605, TikTok = 0.516, Twitter/X = 0.543, WhatsApp = 0.554). Results from OLS models with robust standard errors and 95% confidence intervals. Regression results in Table C21.

Discussion

There are reasons to expect in-transit migrants to be particularly vulnerable to online misinformation. Despite this growing concern, existing research has focused on the general public. We contribute experimental findings to an understudied population within the misinformation scholarship. Specifically, we conduct online experiments with samples of international in-transit migrants in Mexico and Colombia to test the effect of media literacy interventions.

We adjusted existing media literacy tips to make them relevant to migrants and found that these interventions decreased migrants’ intentions to share online migration-related information. The reduction is significantly larger when the information is false than when it is accurate. In line with existing research, we noted that measuring accuracy and sharing intentions in the same individuals prevented us from precisely identifying treatment effects. This highlights the importance of considering how measurement itself can impact outcomes (see, e.g., Epstein et al. 2023).

Our findings regarding measurement effects align with emerging literature on accuracy prompts (Pennycook et al. 2021; Epstein et al. 2023). Study 1, which included accuracy questions for all participants, showed substantially lower misinformation sharing intentions in the control group (19%) compared to Study 2’s control group (63%). This pattern suggests that accuracy questions themselves function as effective behavioral interventions, consistent with meta-analysis showing these prompts reduce misinformation sharing while enhancing discernment (Pennycook and Rand 2022). Evaluating content accuracy before indicating sharing intentions likely makes participants more attentive to accuracy, shaping their subsequent judgments. However, there is also evidence that asking about sharing intentions after prompting participants to evaluate accuracy may reduce discernment (Epstein et al. 2023). This could help explain the lack of significant treatment effects on perceived accuracy and sharing outcomes in Study 1. Additionally, research suggests potential ceiling effects when multiple media literacy strategies are combined (Guess et al. 2024). While we did not experimentally manipulate accuracy prompts, these findings provide compelling evidence for their role as behavioral nudges that may interact with other interventions.

There is growing concern that media literacy interventions can decrease trust in accurate news and reliable sources of information (Hameleers 2023). This possibility raises concerns that migrants exposed to these interventions may reduce trust in information channels and struggle to find reliable and timely information. However, although we found that the intervention in Study 2 decreased trust in social media, we did not find significant effects on trust in more traditional sources. One possibility is that our intervention increased skepticism toward Facebook by warning about the reliability of the source and then presenting posts from that social network. However, the fact that the tips significantly increased sharing discernment suggests that there is no general skepticism toward Facebook, but rather a selective, accuracy-based distrust.

This study shows that, given that migrants highly trust and frequently consult government sources, including these tips on official websites can be a low-cost strategy with the potential to reduce misinformation circulation. Similarly, social media platforms could scale this initiative by providing tips at the top of users’ news feeds, as they already did in 2017 (see Guess et al. 2020). We believe this would be a relevant initiative in developing countries, given the high rates of migrants using social media (Rojas 2023). Digital literacy tips are particularly valuable since migration-related information tends to spread through private channels where strategies such as individual content warnings cannot be effectively implemented. An important open question is whether these effects on sharing intent last in the long term. Moreover, future research should explore behavioral outcomes.

Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/XPS.2025.10015

Data availability

The data, code, and any additional materials required to replicate all analyses in this article are available at the Journal of Experimental Political Science Dataverse within the Harvard Dataverse Network, at: https://doi.org/10.7910/DVN/XTMW3N.

Acknowledgements

We are grateful to Centro Latam Digital for supporting this research with funding and project implementation.

Competing interests

None.

Ethics statement

This research received institutional review board (IRB) approval from the University of British Columbia (#H23-03684). This research adheres to APSA’s Principles and Guidance for Human Subjects Research. Section D in the supplemental appendix expands on the items above.

Footnotes

*

The pre-analysis plan and its amendment are available at https://osf.io/njkcu and https://osf.io/935yc.

This article has earned badges for transparent research practices: Open Data and Open Materials. For details see the Data Availability Statement.

1 The Darien Gap is a dense jungle separating Colombia from Panama. The number of migrants crossing this jungle en route to the United States or Canada has steadily increased, surpassing 520,000 in 2023 (IOM, 2024).

2 The Research Design section of Study 2 details all of the changes across studies.

3 See screening questions in Section A.3.

4 Section A.2 presents the survey structure. Here we focus on the misinformation experiment. Section A.10 describes the second experiment on cybersecurity and online risks and presents its results.

5 Section A.6 presents the vignettes.

6 We disclosed at the end of the survey which posts contained false information.

7 These posts were obtained from popular Facebook pages on topics of migration. The research team fact-checked the information and identified posts that contained misinformation.

8 See descriptive data in Section A.4.

9 The $p$-value from an F-test of the null hypothesis that the treatment effect is the same for women and men is 0.02.

10 The anonymized pre-registered amendment is available at https://tinyurl.com/35rn89vv.

11 Study 1 also included a pre-treatment attention check, though it was unrelated to the flyer content. Compliance rates were comparable across studies (87.4% in Mexico vs. 83.4% in Colombia), suggesting this methodological difference is unlikely to fully explain the differences across studies.

12 This represents a revision from Study 1’s tips, which focused on checking the source (but not the URL domain), quality, and content of information (Figure A9).

13 As in Study 1, we also report results for sharing discernment. Table B12 shows means and standard deviations for the outcome variables.

14 We used similar screening questions as in Study 1 (see Section B.1).

15 The average willingness to share misinformation in the control group in Study 2 is 62.7% as opposed to an average of 18.9% for the control group that answered the accuracy question in Study 1.

16 For example, Pennycook and Rand (2022) find through a meta-analysis that accuracy prompts increase sharing discernment by 71% on average. In our case, we find that the tips increase sharing discernment by 83% with respect to the mean of the control group.

17 This could be explained by the fact that all posts presented come from Facebook.

References

Alencar, Amanda. 2018. “Refugee Integration and Social Media: A Local and Experiential Perspective.” Information, Communication & Society 21(11): 1588–603. doi:10.1080/1369118X.2017.1340500.
Arechar, Antonio A., Allen, Jennifer, Berinsky, Adam J., Cole, Rocky, Epstein, Ziv, Garimella, Kiran, Gully, Andrew, Lu, Jackson G., Ross, Robert M., Stagnaro, Michael N., Zhang, Yunhao, Pennycook, Gordon, and Rand, David G. 2023. “Understanding and Combatting Misinformation Across 16 Countries on Six Continents.” Nature Human Behaviour 7(9): 1502–13. doi:10.1038/s41562-023-01641-6.
Aslett, Kevin, Sanderson, Zeve, Godel, William, Persily, Nathaniel, Nagler, Jonathan, Bonneau, Richard, and Tucker, Joshua A. 2021. “Testing the Effect of Information on Discerning the Veracity of News in Real Time.” Journal of Experimental Political Science 11: 1–15.
Bah, Tijan L., Batista, Catia, Gubert, Flore, and McKenzie, David. 2023. “Can Information and Alternatives to Irregular Migration Reduce ‘Backway’ Migration from The Gambia?” Journal of Development Economics 165: 103153. doi:10.1016/j.jdeveco.2023.103153.
Beber, Bernd, and Scacco, Alexandra. 2022. The Myth of the Misinformed Migrant? Survey Insights from Nigeria’s Irregular Migration Epicenter. Essen: RWI – Leibniz Institute for Economic Research.
Borkert, Maren, Fisher, Karen E., and Yafi, Eiad. 2018. “The Best, the Worst, and the Hardest to Find: How People, Mobiles, and Social Media Connect Migrants in (to) Europe.” Social Media + Society 4(1): 2056305118764428. doi:10.1177/2056305118764428.
Bowles, Jeremy, Croke, Kevin, Larreguy, Horacio, Liu, Shelley, and Marshall, John. 2023. “Sustaining Exposure to Fact-Checks: Misinformation Discernment, Media Consumption, and Its Political Implications.” American Political Science Review: 1–24. doi:10.1017/S0003055424001394.
Dekker, Rianne, Engbersen, Godfried, Klaver, Jeanine, and Vonk, Hanna. 2018. “Smart Refugees: How Syrian Asylum Migrants Use Social Media Information in Migration Decision-Making.” Social Media + Society 4(1): 2056305118764439. doi:10.1177/2056305118764439.
Epstein, Ziv, Sirlin, Nathaniel, Arechar, Antonio, Pennycook, Gordon, and Rand, David. 2023. “The Social Media Context Interferes with Truth Discernment.” Science Advances 9(9): eabo6169. doi:10.1126/sciadv.abo6169.
Gillespie, Marie, Osseiran, Souad, and Cheesman, Margie. 2018. “Syrian Refugees and the Digital Passage to Europe: Smartphone Infrastructures and Affordances.” Social Media + Society 4(1): 2056305118764440. doi:10.1177/2056305118764440.
Guay, Brian, Berinsky, Adam J., Pennycook, Gordon, and Rand, David. 2023. “How to Think About Whether Misinformation Interventions Work.” Nature Human Behaviour 7(8): 1231–33. doi:10.1038/s41562-023-01667-w.
Guess, Andrew M., Lerner, Michael, Lyons, Benjamin, Montgomery, Jacob M., Nyhan, Brendan, Reifler, Jason, and Sircar, Neelanjan. 2020. “A Digital Media Literacy Intervention Increases Discernment Between Mainstream and False News in the United States and India.” Proceedings of the National Academy of Sciences 117(27): 15536–45. doi:10.1073/pnas.1920498117.
Guess, Andrew, McGregor, Shannon, Pennycook, Gordon, and Rand, David. 2024. “Unbundling Digital Media Literacy Tips: Results from Two Experiments.” OSF. https://osf.io/u34fp. doi:10.31234/osf.io/u34fp.
Gómez, Juan Arturo, and Schmidt, Samantha. 2023. “Crossing Jungle and Desert, Migrants Navigate a Sea of Misinformation.” The Washington Post. https://www.washingtonpost.com/world/2023/05/14/title-42-migrant-rumors-tiktok-whatsapp/.
Hameleers, Michael. 2023. “The (Un)Intended Consequences of Emphasizing the Threats of Mis- and Disinformation.” Media and Communication 11(2): 5–14. doi:10.17645/mac.v11i2.6301.
Holland, Alisha C., and Peters, Margaret E. 2020. “Explaining Migration Timing: Political Information and Opportunities.” International Organization 74(3): 560–83. doi:10.1017/S002081832000017X.
IOM. 2024. DTM Panama—Flow Monitoring of Migrant Population—Darién (May 2024). Panama City: International Organization for Migration.
Lewandowsky, Stephan, and van der Linden, Sander. 2021. “Countering Misinformation and Fake News Through Inoculation and Prebunking.” European Review of Social Psychology 32(2): 348–84. doi:10.1080/10463283.2021.1876983.
Mancini, Tiziana, Sibilla, Federica, Argiropoulos, Dimitris, Rossi, Michele, and Everri, Marina. 2019. “The Opportunities and Risks of Mobile Phones for Refugees’ Experience: A Scoping Review.” PLOS ONE 14(12): e0225684. doi:10.1371/journal.pone.0225684.
Mercy Corps. 2018. “How Technology is Affecting the Refugee Crisis.” https://www.mercycorps.org/blog/technology-refugee-crisis.
Noori, Simon. 2022. “Navigating the Aegean Sea: Smartphones, Transnational Activism and Viapolitical In(ter)ventions in Contested Maritime Borderzones.” Journal of Ethnic and Migration Studies 48(8): 1856–72. doi:10.1080/1369183X.2020.1796265.
Nyhan, Brendan. 2021. “Why the Backfire Effect Does Not Explain the Durability of Political Misperceptions.” Proceedings of the National Academy of Sciences 118(15): e1912440117. doi:10.1073/pnas.1912440117.
Offer-Westort, Molly, Rosenzweig, Leah R., and Athey, Susan. 2024. “Battling the Coronavirus ‘Infodemic’ Among Social Media Users in Kenya and Nigeria.” Nature Human Behaviour 8(5): 823–34. doi:10.1038/s41562-023-01810-7.
Pennycook, Gordon, and Rand, David G. 2022. “Accuracy Prompts Are a Replicable and Generalizable Approach for Reducing the Spread of Misinformation.” Nature Communications 13(1): 2333. doi:10.1038/s41467-022-30073-5.
Pennycook, Gordon, Epstein, Ziv, Mosleh, Mohsen, Arechar, Antonio A., Eckles, Dean, and Rand, David G. 2021. “Shifting Attention to Accuracy Can Reduce Misinformation Online.” Nature 592(7855): 590–95. doi:10.1038/s41586-021-03344-2.
Pereira, Frederico Batista, Bueno, Natália S., Nunes, Felipe, and Pavão, Nara. 2023. “Inoculation Reduces Misinformation: Experimental Evidence from Multidimensional Interventions in Brazil.” Journal of Experimental Political Science 11: 239–50. doi:10.1017/XPS.2023.11.
Porter, Ethan, and Wood, Thomas J. 2022. “Political Misinformation and Factual Corrections on the Facebook News Feed: Experimental Evidence.” Journal of Politics 84(3): 1812–17. doi:10.1086/719271.
Ragozzino, Martina Rapido, and Pappier, Juan. 2023. “‘This Hell Was My Only Option’: Abuses Against Migrants and Asylum Seekers Pushed to Cross the Darién Gap.” https://www.hrw.org/report/2023/11/09/hell-was-my-only-option/abuses-against-migrants-and-asylum-seekers-pushed-cross.
Rojas, Daniel. 2023. Acceso y uso de datos móviles en poblaciones migrantes [Access and use of mobile data among migrant populations]. Mexico City: Centro LATAM Digital.
Siegel, Alexandra, Wolff, Jessica, and Weinstein, Jeremy. 2024. “#Asylum: How Syrian Refugees Engage with Online Information.” Journal of Quantitative Description: Digital Media 4: 1–46.
UNHCR. 2023. “Sobreviviendo al Darién: la travesía de refugiados y migrantes por la selva” [Surviving the Darién: the journey of refugees and migrants through the jungle]. https://www.acnur.org/publicaciones/sobreviviendo-al-darien-la-travesia-derefugiados-y-migrantes-por-la-selva.
Figure 1. Effects of Misinformation Treatments in Mexico: Average Treatment Effect (ATE) of tips, or tips and examples, on classification accuracy (placebo mean = 0.583), perceived accuracy for accurate news (placebo mean = 0.371), perceived accuracy for false news (placebo mean = 0.276), accuracy discernment (placebo mean = 0.095), accurate news sharing intentions (placebo mean = 0.217), fake news sharing intentions (placebo mean = 0.189), and sharing discernment (placebo mean = 0.029). Results from ordinary least squares (OLS) models with robust standard errors and 95% confidence intervals. Regression results in Table C15.

Figure 2. Effects of Misinformation Treatments in Mexico: Average Treatment Effect (ATE) of tips, or tips and examples, on trust in information sources (composite index, placebo mean = 0.627) and its individual components (placebo mean for Gov. = 0.725, Newspapers = 0.648, Facebook = 0.623, TikTok = 0.570, Twitter/X = 0.574, WhatsApp = 0.623). Results from OLS models with robust standard errors and 95% confidence intervals. Regression results in Table C18.

Figure 3. Effects of Misinformation Treatments in Colombia: Average Treatment Effect (ATE) of tips on accurate news sharing intentions (placebo mean = 0.746), fake news sharing intentions (placebo mean = 0.627), and sharing discernment (difference between sharing rates of accurate and fake news, placebo mean = 0.119). Results from OLS models with robust standard errors and 95% confidence intervals. Regression results in Table C20.

Figure 4. Effects of Misinformation Treatments in Colombia: Average Treatment Effect (ATE) of tips on trust in information sources (composite index, placebo mean = 0.583) and its individual components (placebo mean for Gov. = 0.692, Newspapers = 0.589, Facebook = 0.605, TikTok = 0.516, Twitter/X = 0.543, WhatsApp = 0.554). Results from OLS models with robust standard errors and 95% confidence intervals. Regression results in Table C21.

Supplementary material: Bandiera and Rojas supplementary material (File, 10.7 MB).

Supplementary material: Bandiera and Rojas Dataset (Link).