Research on citizens’ democratic support predominantly relies on surveys. However, the possibility of social desirability biases (SDBs) raises doubts about whether such instruments capture sincere attitudes. We search for evidence of SDB in measures of democratic attitudes in three studies. The first two leverage variations in survey mode (self-completion vs. face-to-face) in the European Social Survey’s Democracy module, drawing on evidence that interviewer absence encourages voicing socially undesirable opinions. The third uses a double-list experiment to estimate the prevalence of an anti-democratic attitude. Using data from as many as 24 European countries, we find no evidence that SDB inflates survey measures of democratic attitudes. These results contribute to our understanding of democratic attitudes and to the methodological toolkit of interested scholars.
Chapter Six begins by looking at how Americans of different racial and ethnic stripes think about politics and how these views have changed over time. This chapter looks not only at racial divisions in policy preferences but also at racial differences in public trust and confidence in institutions. Excerpts examine the echo chamber and skepticism over polling and the measurement of public opinion.
Most public opinion research in China uses direct questions to measure support for the Chinese Communist Party (CCP) and government policies. These direct question surveys routinely find that over 90 per cent of Chinese citizens support the government. From this, scholars conclude that the CCP enjoys genuine legitimacy. In this paper, we present results from two survey experiments in contemporary China that make clear that citizens conceal their opposition to the CCP for fear of repression. When respondents are asked directly, we find, like other scholars, approval ratings for the CCP that exceed 90 per cent. When respondents are asked in the form of list experiments, which confer a greater sense of anonymity, CCP support hovers between 50 per cent and 70 per cent. This represents an upper bound, however, since list experiments may not fully mitigate incentives for preference falsification. The list experiments also suggest that fear of government repression discourages some 40 per cent of Chinese citizens from participating in anti-regime protests. Most broadly, this paper suggests that scholars should stop using direct question surveys to measure political opinions in China.
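As a rough illustration of the list-experiment logic described in this abstract, the prevalence of the sensitive attitude is usually estimated as the difference in mean item counts between the treatment group, which sees the sensitive item, and the control group, which does not. A minimal Python sketch using hypothetical counts, not the authors' data or code:

```python
import numpy as np

# Hypothetical item counts. The control group saw J = 3 innocuous items;
# the treatment group saw the same items plus the sensitive item.
# Respondents report only HOW MANY items apply to them, never which ones.
control_counts = np.array([1, 2, 0, 3, 1, 2, 2, 1])
treatment_counts = np.array([2, 3, 1, 3, 2, 2, 3, 1])

# The difference in mean counts estimates the share of respondents
# for whom the sensitive item holds.
prevalence_hat = treatment_counts.mean() - control_counts.mean()

# Conventional standard error for a difference in means.
se = (treatment_counts.var(ddof=1) / treatment_counts.size
      + control_counts.var(ddof=1) / control_counts.size) ** 0.5

print(f"estimated prevalence: {prevalence_hat:.2f} (SE {se:.2f})")
```

Because respondents never reveal which items apply to them, the design is thought to reduce the incentive to falsify answers to the sensitive item.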
Social scientists use list experiments in surveys to estimate the prevalence of sensitive attitudes and behaviors in a population of interest. However, the cumulative evidence suggests that the list experiment estimator is underpowered to capture the extent of sensitivity bias in common applications. The literature suggests double list experiments (DLEs) as an alternative to improve along the bias-variance frontier. This variant of the research design brings the additional burden of justifying the list experiment identification assumptions in both lists, which raises concerns over the validity of DLE estimates. To overcome this difficulty, this paper outlines two statistical tests to detect strategic misreporting that follows from violations to the identification assumptions. I illustrate their implementation with data from a study on support toward anti-immigration organizations in California and explore their properties via simulation.
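In a double list experiment, every respondent answers two lists, and the sensitive item is appended to list A for one random half of the sample and to list B for the other half, so each respondent is treated on one list and serves as a control on the other; the two single-list estimates are then averaged. A schematic sketch under those assumptions, with hypothetical counts and variable names rather than the paper's own code:

```python
import numpy as np

def single_list_estimate(treated_counts, control_counts):
    """Difference-in-means prevalence estimate for one list."""
    return np.mean(treated_counts) - np.mean(control_counts)

# Hypothetical item counts. Group 1 receives the sensitive item on list A,
# group 2 receives it on list B, so each group is treated once and
# serves as the control once.
group1_list_a = np.array([3, 2, 4, 1, 2])   # treated on A
group1_list_b = np.array([2, 1, 3, 1, 2])   # control on B
group2_list_a = np.array([2, 2, 3, 1, 1])   # control on A
group2_list_b = np.array([3, 2, 4, 2, 2])   # treated on B

est_a = single_list_estimate(group1_list_a, group2_list_a)
est_b = single_list_estimate(group2_list_b, group1_list_b)

# The DLE estimate averages the two single-list estimates, which is what
# improves precision relative to a single list with the same sample size.
dle_estimate = (est_a + est_b) / 2
print(round(dle_estimate, 2))
```

The extra precision comes at the cost described in the abstract: the identification assumptions must now hold for both lists, which is what the proposed misreporting tests are designed to probe.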
The incumbent-led subversion of democracy represents the most prevalent form of democratic backsliding in recent decades. A central puzzle in this mode of backsliding is why these incumbents enjoy popular support despite their actions against democracy. We address this puzzle using the case of Philippine President Rodrigo Duterte. Although some Philippine analysts have speculated that his popularity was inflated by social desirability bias (SDB) among survey respondents, there has been limited empirical examination. Our pre-registered list experiment surveys, conducted in February/March 2021, detected SDB-induced overreporting of about 39 percentage points in face-to-face surveys and 28 percentage points in online surveys. We also found that the poor, Mindanaoans, and those who believed their neighbors supported Duterte were more likely to respond in line with SDB. These possibly counter-intuitive results should be interpreted with caution because the survey was conducted during the height of the COVID-19 lockdown, and the findings cannot necessarily be extrapolated to other periods of his presidency. Nevertheless, this study suggests that preference falsification could be an alternative explanation for the puzzle of popular incumbents in democratic backsliding.
Time preferences may explain public opinion about a wide range of long-term policy problems with costs and benefits realized in the distant future. However, mass publics may discount these costs and benefits because they are later or because they are more uncertain. Standard methods to elicit individual-level time preferences tend to conflate risk and time attitudes and are susceptible to social desirability bias. A potential solution relies on a costly lab-experimental method, convex time budgets (CTB). We present and experimentally validate an affordable version of this approach for implementation in mass surveys. We find that the theoretically preferred CTB patience measure predicts attitudes toward a local, delayed investment problem but fails to predict support for more complex, future-oriented policies.
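For readers unfamiliar with convex time budgets, the core idea (in the Andreoni and Sprenger tradition; the notation below is an illustration, not necessarily the authors' specification) is that respondents repeatedly split an experimental budget between an earlier and a later payment at varying gross interest rates, and patience parameters are inferred from how allocations shift with the rate. A stylized version of the respondent's problem:

\[
\max_{c_t,\; c_{t+k}} \; u(c_t) + \beta\,\delta^{k}\,u(c_{t+k})
\quad \text{subject to} \quad (1+r)\,c_t + c_{t+k} = m,
\]

where c_t and c_{t+k} are the earlier and later payments, m is the experimental budget, 1+r is the gross return on money moved to the later date, delta is the per-period discount factor, and beta (at most 1) captures present bias; the estimated discounting parameters provide the patience measure referred to in the abstract.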
Edited by
Ruth Kircher, Mercator European Research Centre on Multilingualism and Language Learning, and Fryske Akademy, Netherlands; and Lena Zipp, Universität Zürich
This chapter shows how semi-structured interviews can contribute to the study of language attitudes. It pays particular attention to how understanding interviews as contextually and socially situated speech events, shaped by the spatial and temporal context in which they take place and the relationship between interviewer(s) and interviewee(s), is crucial for the analysis and interpretation of interview data. It addresses the strengths of using interviews to investigate attitudes (e.g. that they may bring to light new information, new topics, and new dimensions to established knowledge) as well as their limitations (e.g. that participants may say what they believe the interviewer wants to hear or agree with the interviewer’s questions, regardless of their content). Following a discussion of the key practical issues of planning and research design, including constructing an interview protocol, choosing the language or variety to use in the interview, and presenting multiple languages or varieties in interview transcripts, it explains how the qualitative data resulting from semi-structured interviews can be analysed thematically. The chapter ends with an illustration of interview methodology on the basis of a case study of attitudes towards Cypriot Greek in London’s Greek Cypriot diaspora.
Edited by
Ruth Kircher, Mercator European Research Centre on Multilingualism and Language Learning, and Fryske Akademy, Netherlands; and Lena Zipp, Universität Zürich
Following on from the previous chapter on questionnaire-based elicitation of quantitative data, this chapter outlines how open-ended questionnaire items can be used to elicit qualitative language attitudes data. These items invite the respondent to freely answer a question with a few words, sentences, or a paragraph of free writing, thereby eliciting idiosyncratic responses. Open-ended items provide complex and potentially unexpected information on the different attitude components and can thus play a complementary role to closed-ended items in the evaluation of attitudes. The chapter guides the reader through a wide range of ways to use open-ended items and discusses their strengths as well as weaknesses. Building on the preceding chapter, key issues of study design are added, including the choice of open-ended question types and factors that inform decisions of participant sampling. The chapter instructs the reader how to pilot a questionnaire and how to conduct (inductive or deductive) qualitative content analysis. Finally, it addresses ethical concerns of privacy and confidentiality. A case study on attitudes towards different varieties of English in Fiji serves as illustration of the main points made in the chapter.
Question effects are important when designing and interpreting surveys. Question responses are influenced by preceding questions through ordering effects. Identity Theory is employed to explain why some ordering effects exist. A conceptual model predicts respondents will display identity inertia, where the identity cued in one question will be expressed in subsequent questions regardless of whether those questions cue that identity. Lower amounts of identity inertia are found compared to habitual inertia, where respondents tend to give similar answers to previous questions. The magnitude of both inertias is small, suggesting they are only minor obstacles to survey design.
Polls asking respondents about their beliefs in conspiracy theories have become increasingly commonplace. However, researchers have expressed concern about the willingness of respondents to divulge beliefs in conspiracy theories due to the stigmatization of those ideas. We use an experimental design similar to a list experiment to decipher the effect of social desirability bias on survey responses to eight conspiratorial statements. Our study includes 8290 respondents across seven countries, allowing for the examination of social desirability bias across various political and cultural contexts. While the proportion of individuals expressing belief in each statement varies across countries, we observe identical treatment effects: respondents systematically underreport conspiracy beliefs. These findings suggest that conspiracy beliefs may be more prominent than current estimates suggest.
It is often assumed that consumers’ willingness to pay (WTP) for eco-labeled products in research settings reflects not a desire for environmental protection but rather social pressure to make decisions that reflect favorably on the respondent, limiting the validity of findings. Using a second-price Vickrey experimental auction, this study found higher WTP for an eco-labeled product than for a comparable good, but social desirability bias, measured by the Marlowe–Crowne Social Desirability Scale, was not a significant predictor of WTP. Instead, environmental consciousness, environmental knowledge, education, and available information were stronger predictors of WTP for eco-labeled goods.
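Because the study relies on a second-price (Vickrey) auction, it may help to recall why that mechanism is used to elicit WTP: the highest bidder wins but pays only the second-highest bid, so no bidder can gain by bidding above or below their true valuation. A toy sketch with hypothetical bids, not the study's data:

```python
# Hypothetical sealed bids (in dollars) for the eco-labeled product.
bids = {"bidder_A": 4.50, "bidder_B": 6.25, "bidder_C": 5.10, "bidder_D": 3.80}

# The highest bidder wins but pays the second-highest bid, which makes
# bidding one's true willingness to pay a weakly dominant strategy.
ranked = sorted(bids, key=bids.get, reverse=True)
winner, price_paid = ranked[0], bids[ranked[1]]

print(winner, price_paid)   # bidder_B wins and pays 5.10
```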
Identifying taxpayers who engage in noncompliant behaviour is crucial for tax authorities to determine appropriate taxation schemes. However, because taxpayers have an incentive to conceal their true income, it is difficult for tax authorities to uncover such behaviour (social desirability bias). Our study mitigates this bias in responses to sensitive questions by employing the list experiment technique, which allows us to identify the characteristics of taxpayers who engage in tax evasion. Using a dataset obtained from a tax office in Jakarta, Indonesia, we conducted a computer-assisted telephone interviewing survey in 2019. Our results revealed that 13% of taxpayers, particularly those who were older, male, corporate employees, or members of a certain ethnic group, had reported lower income than their true income on their tax returns. These findings suggest that our research design can be a useful tool for understanding tax evasion and for developing effective taxation schemes that promote tax compliance.
List experiments are a widely used survey technique for estimating the prevalence of socially sensitive attitudes or behaviors. Their design, however, makes them vulnerable to bias: because treatment group respondents see a greater number of items (J + 1) than control group respondents (J), the treatment group mean may be mechanically inflated due simply to the greater number of items. The few previous studies that directly examine this do not arrive at definitive conclusions. We find clear evidence of inflation in an original dataset, though only among respondents with low educational attainment. Furthermore, we use available data from previous studies and find similar heterogeneous patterns. The evidence of heterogeneous effects has implications for the interpretation of previous research using list experiments, especially in developing world contexts. We recommend a simple solution: using a necessarily false placebo statement for the control group equalizes list lengths, thereby protecting against mechanical inflation without imposing costs or altering interpretations.
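The recommended fix can be illustrated schematically: padding the control list with an item that is false for everyone equalizes list lengths at J + 1, so any purely mechanical effect of evaluating a longer list hits both groups equally and drops out of the difference in means. A sketch with hypothetical item wordings, not the authors' instruments:

```python
# Hypothetical item wordings, for illustration only.
baseline_items = [
    "I voted in the last national election",
    "I donated to a charity in the past year",
    "I attended a community meeting in the past month",
]
sensitive_item = "I would accept a payment in exchange for my vote"
placebo_item = "I have traveled to the Moon"   # false for everyone by design

# Standard design: control sees J items, treatment sees J + 1.
standard_control = baseline_items
standard_treatment = baseline_items + [sensitive_item]

# Placebo design: both groups see J + 1 items, so any mechanical
# inflation from a longer list affects both groups equally.
placebo_control = baseline_items + [placebo_item]
placebo_treatment = baseline_items + [sensitive_item]
```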
Fully randomized conjoint analysis can mitigate many of the shortcomings of traditional survey methods in estimating attitudes on controversial topics. This chapter explains how we applied conjoint analysis at seven universities and describes the population of participants in our experiments.
Debates over diversity on campus are intense, they command media attention, and the courts care about how efforts to increase diversity affect students’ experiences and attitudes. Yet we know little about what students really think because measuring attitudes on politically charged issues is challenging. This book adopts an innovative approach to addressing this challenge.
Women tend to under-report or misreport their abortion experiences, mainly because abortion is considered a sensitive issue for cultural, religious, political or other reasons in many countries across the world. Turkey is no exception: induced abortion has become an increasingly sensitive issue there in recent years, owing to intense statements against it on religious grounds by influential politicians and a hidden agenda to prohibit the practice, especially in public health facilities. This study focused on the increase in the level of misreporting of induced abortion in Turkey and its link to social desirability bias, using pooled data from the 1993 and 2013 Turkish Demographic and Health Surveys. A probabilistic classification model was used to classify women’s reported abortions. The findings confirmed that the level of misreporting of induced abortions increased from 18% to 53% of all terminated pregnancies over the period 1993–2013 in Turkey. This marked increase, especially among women in the lower socioeconomic sections of society, may be largely associated with the prevailing political environment and the increase in social stigmatization of induced abortion in Turkey over recent decades.
This short report exploits a unique opportunity to investigate the implications of response bias in survey questions about voter turnout and vote choice in new democracies. We analyze data from a field experiment in Benin, where we gathered official election results and panel survey data representative at the village level, allowing us to directly compare average outcomes across both measurement instruments in a large number of units. We show that survey respondents consistently overreport turning out to vote and voting for the incumbent, and that the bias is large and worse in contexts where question sensitivity is higher. This has important implications for the inferences we draw about an experimental treatment, indicating that the response bias we identify is correlated with treatment. Although the results using the survey data suggest that the treatment had the hypothesized impact, they are also consistent with social desirability bias. By contrast, the administrative data lead to the conclusion that the treatment had no effect.
Much research examining gender bias in politics analyzes responses to explicit survey questions asking individuals whether they prefer male over female leaders or agree that male political leaders are superior. Drawing insights from the measurement of other types of prejudice, this article explores the methodological shortcomings of a widely used question of this type. Analyzing the results of two surveys—one national and one state-level—I compare response patterns to a standard, highly explicit question that is frequently administered by the Pew Research Center with those for a modestly altered item that employs multiple strategies to reduce social desirability bias. Compared with the alternative measure, the conventional item seriously underreports prejudice against women leaders. Moreover, the underreporting of bias is especially prevalent among individuals belonging to groups that are strong advocates of gender equality.
Prior research demonstrates that responses to surveys can vary depending on the race, gender, or ethnicity of the investigator asking the question. We build upon this research by empirically testing how information about researcher identity in online surveys affects subject responses. We do so by conducting an experiment on Amazon’s Mechanical Turk in which we vary the name of the researcher in the advertisement for the experiment and on the informed consent page in order to cue different racial and gender identities. We fail to reject the null hypothesis that there is no difference in how respondents answer questions when assigned to a putatively black/white or male/female researcher.