
Measuring Judicial Ideology Through Text

Published online by Cambridge University Press:  08 July 2025

Jake S. Truscott* (University of Florida)

Michael K. Romano (Shenandoah University)

*Corresponding author: Jake S. Truscott; Email: jaketruscott@ufl.edu

Abstract

Explorations of ideology retain special significance in contemporary studies of judicial politics. While some existing methodologies draw on voting patterns and coalition alignments to map a jurist’s latent features, many are otherwise reliant on supplemental proxies – often directly from adjacent actors or via assessments from various prognosticators. We propose an alternative that not only leverages observable judicial behavior, but does so through jurists’ articulations on the law. In particular, we adapt a hierarchical factor model to demonstrate how latent ideological preferences emerge through the written text of opinions. Relying on opinion content from Justices of the Supreme Court, we observe a discernible correlation between linguistic choices and latent expressions of ideology irrespective of known preferences or voting patterns. Testing our method against Martin-Quinn, we find our approach strongly correlates with this validated and commonly used measure of judicial ideology. We conclude by discussing the intuitive power of text as a feature of ideology, as well as how this process can extend to judicial actors and institutions beyond the Supreme Court.

Information

Type
Research Note
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of the Law and Courts Organized Section of the American Political Science Association

Introduction

Terminology is an expression of language and language is an expression of ideology (Thompson, Reference Thompson1987; Woolard and Schieffelin, Reference Woolard and Schieffelin1994). The choices judges make about how they express decisions through written opinions are thus an expression of preferences, shaped by personal inclinations (Segal and Spaeth, Reference Segal and Spaeth2002), strategic machinations (Bailey and Maltzman, Reference Bailey and Maltzman2011; Bonneau et al., Reference Bonneau, Hammond, Maltzman and Wahlbeck2007), and considerations of the perceived audience (Baum, Reference Baum2009; Romano and Curry, Reference Romano and Curry2019). The ramifications of these choices are consequential for the interpretation of the law, as ideological differences ostensibly result in lexical variation in how judges delineate the facts and legal aspects pertinent to a decision. Put more simply: the words judges choose matter and reflect their own ideological beliefs, which in turn have a considerable impact on how we know and speak about the law. An illustrative example is the contrasting use of terms such as “healthcare provider” – a neutral descriptor for medical professionals – and “abortionist” – a term perceived as derogatory yet endowed with legal significance through rulings like Dobbs v. Jackson Women’s Health Organization (2022). Despite serving the same lexical purpose, these terms carry distinct ideological meanings.

There remains a notion, particularly among some members of the legal academy and especially among Article III judges, that jurists are precluded from exhibiting ideology as a component of their decision-making. This belief retains special significance in American government, insofar as deeply rooted principles of the separation of powers system imply that judicial actors are insulated from the political influences ingrained within the elected branches. However, studies of judicial decision-making routinely find that voting behaviors are emblematic of distinctive ideological preferences. Even if we presume that judges resist actively considering their own ideologies when making decisions, former Chief Justice Rehnquist’s (1986) proclamation that judging does not occur in a vacuum suggests that judging may be a matter of perspective.Footnote 1 Perspective informs judges not only how to approach jurisprudence and adjudication but inevitably how they shape the law, which “by extension, has significant social and economic consequences for individual litigants and society” (Bonica and Sen, Reference Bonica and Sen2021, 97). As Romano and Curry (Reference Romano and Curry2019) note with respect to state supreme courts, decisions on the merits can often tell us the baseline directionality of the law – that is, whether a court swung liberal or conservative on a particular issue or remained neutral – but they cannot explain why. A deeper examination of the language judges choose when crafting opinions reveals not just what the law means but also how judges think, providing a richer tapestry of ideological cues that might be missed by simply looking at the votes (Bailey and Maltzman, Reference Bailey and Maltzman2011; Hinkle, Reference Hinkle2015; Songer and Lindquist, Reference Songer and Lindquist1996).

Here, we build on these measures by introducing a process that leverages the wealth of data obtained from written text. Using a Wordshoal approach, which has proven to be a reliable tool for hierarchical estimation of latent preferences in legislative speech (Lauderdale and Herzog, Reference Lauderdale and Herzog2016), we demonstrate how ideology emerges in legal text. We thus propose an alternative strategy for estimating spatial models of judicial ideology that treats written opinions as a form of speech and expression of belief, aiming to demonstrate how lexical variation in these opinions can be used to estimate latent preferences. To showcase this, we examine non-unanimous cases – including majority opinions, (special) concurrences, and dissents – decided by the Supreme Court between the 2005 and 2022 terms. This period, which corresponds with John Roberts’s tenure as Chief Justice, has been highlighted as one of increased partisanship and polarization within what is commonly believed to be a collegial court (Salamone, Reference Salamone2018).Footnote 2 As such, we believe that the Roberts Court represents a good “first test” of whether judicial voice, through the opinion, correlates with ideology. We contend that Justices form distinctive ideological voices through the creation of, and adherence to, specific forms of precedent across their careers. Measuring judicial opinions in this way offers better clarity on the creation of judicial regimes, the development of clearer concepts of judicial philosophy, and how ideology impacts votes.

Measuring the ideologies of judicial actors remains an important enterprise. However, scholars and observers alike continue to debate the merits of quantitatively representing these features. From rudimentary labeling of jurists using proxies to comprehensive statistical measures using Bayesian processes and machine learning, several mechanisms exist to estimate these observed and latent preferences. These efforts have culminated in a rich body of methods for estimating judicial ideology. We argue that Wordshoal provides an additional step in the right direction toward furthering our collective ability to open the “black box” of judicial decision-making. In doing so, we believe that our measures represent an important contribution to understanding how judges’ latent ideologies become activated beyond mere voting. By considering how language reveals dormant aspects of ideology used to justify legal decision-making, we argue that our methodology better situates judicial ideology in a broader policy (or issue) space and engages with the nuances of judicial behavior. We thereby not only take the law seriously but further showcase how ideology is an important determinant of what becomes law.

Understanding judicial motives: The search for judicial ideology

Since Pritchett (Reference Pritchett1942) and Murphy (Reference Murphy1964), judicial scholarship has placed preeminent importance on reliability and accuracy in measuring the ideological positions of judges.Footnote 3 It is generally accepted within political science that judges’ policy preferences play an important role in judicial behavior and decision-making, both in the United States and comparatively abroad. However, measuring judicial preferences and the extent to which they are influenced by both internal and external factors has been the dominant point of contention for expanding our understanding of judicial behavior, as well as critiquing the limits of judicial ideology to explain decision-making.Footnote 4

Researchers have been implementing various measures of judicial ideology for several decades. Bonica and Sen (Reference Bonica and Sen2021) provide a comprehensive overview of this literature, which – as they describe – has exhibited considerable change in the breadth of methodological rigor. Understandably, the bulk of this research concerns the Supreme Court,Footnote 5 though scholars are often keen to warn against over-extrapolating inferences from the Justices’ behaviors because it represents a small sample of decision-making and “places considerable importance on nine idiosyncratic individuals who are relatively unconstrained in their position atop the American judicial hierarchy (Bailey, Reference Bailey2007)” (Bonica and Sen, Reference Bonica and Sen2021, 98). Nonetheless, ideology – particularly when measured through judge-level features of partisanship – has proven to be a reliable predictor of the Justices’ decision-making. We provide an overview of some of these methodologies in Table 1, which includes both the source of observable data and the strengths and weaknesses of each approach as posited by Bonica and Sen (Reference Bonica and Sen2021).Footnote 6

Table 1. Overview of Existing Measures of Judicial Ideology (Bonica and Sen, Reference Bonica and Sen2021)

1 See Segal and Cover (Reference Segal and Cover1989).

2 See Bonica and Sen (Reference Bonica and Sen2017).

4 See Bonica and Sen (Reference Bonica and Sen2017).

While political science has embraced the role of ideology in judicial decision-making, some legal scholarship and other observers remain unconvinced. As noted by Fischman and Law (Reference Fischman and Law2009), “there is little reason to expect those who practice or teach the craft of legal argument to embrace a body of research that questions the extent to which judicial decision-making is actually driven by legal argument” (134).Footnote 7 For as long as judicial scholars have amassed evidence that a principal component of judicial behavior is the sincere policy preferences of the judge (or judges), legal scholars have labeled their findings overtly political (Edwards, Reference Edwards1984), “born in a congeries of false beliefs,” (Tamanaha, Reference Tamanaha2009, 687-688), or unfortunately, “innocently ignorant” (Cross, Reference Cross1997, 251). The primary concern among these and other legal scholars is two-fold: First, either by methodological ignorance or biases against theoretical approaches that would minimize the role of “the law” in judging, legal research often contends that judicial scholars have claimed ideological biases in judges without appropriately measuring or explaining what is meant by “ideology” (Fischman and Law, Reference Fischman and Law2009). To be sure, methodological rigor by judicial scholars attempting to measure judicial ideology often minimizes the value of clear conceptualization of the subject, choosing instead to state simply that Justice X votes the way they do because they are very conservative (liberal). Ideology is a “highly flexible conceptual tool” (Gerring, Reference Gerring1997, 957) but often boils down to how actors organize and express their opinions and how those opinions are formed by values. For judicial ideology, this baseline is often used to operationalize ideology by focusing on outward actions such as votes, making it appear that courts are “just another political institution,” which has dangerous implications for judicial legitimacy (Gibson and Nelson, Reference Gibson and Nelson2017).

Second, legal scholarship contends that the measurement of ideology itself misses the point of the law by largely ignoring the role of precedent and legal opinion in decision-making. Part of this, we expect, is born from longstanding notions of how common law principles emerge in judicial behavior. Students of the law are often schooled on the ethics of judicial decision-making, particularly as it relates to how stare decisis applies to synthesizing and formulating jurisprudence. Even in the most senior legal institutions, like supreme courts, a considerable majority of the caseload involves routine (or even mundane) applications of the law. In these circumstances, especially when the role of a judge is simply to apply codified standards, elements of ideology and interpretation are effectively restrained. An opinion piece published by The New York Times entitled “The Supreme Court Is Not as Politicized as You May Think” (Donnelly and Loeb, Reference Donnelly and Loeb2023) draws further on this point by arguing that painting the Supreme Court as invariably political requires observers to ignore the vast majority of statutory litigation reviewed by the Justices, almost all of which ends in total (or near-total) unanimity. Yet, while it may be true that much of the Court’s docket lacks definitive political salience (as do most dockets across the federal and state judiciaries),Footnote 8 this contention is itself the result of self-selecting observations to fit a broader thesis.Footnote 9 Unanimous opinions can certainly constrain ideological preferences, though often only when ideology can be exchanged for some level of legal certainty in the case (Corley et al., Reference Corley, Steigerwalt and Ward2013). In addition, the Supreme Court’s discretionary powers grant the Justices the ability to think far more about policy goals than law (Baum, Reference Baum1997; Segal and Spaeth, Reference Segal and Spaeth2002), as it is “easy for them to find legal justification for whatever position they prefer” (Baum, Reference Baum1997, 64). And while the vast majority of scholars would agree that personal preferences and policy are not the only things that matter (Hansford and Spriggs, Reference Hansford and Spriggs2006; Richards and Kritzer, Reference Richards and Kritzer2002; Songer and Lindquist, Reference Songer and Lindquist1996), ideology often dictates which precedents are recognized and promulgated over time and across courts (Fix and Kassow, Reference Fix and Kassow2020; Hinkle, Reference Hinkle2015).

Understanding ideology with words: Text-based methods for ideological measurement

As Table 1 suggests, the measures of judicial ideology are considerably varied, both in their sources of observable data and in the inferences that can reliably be drawn from them. However, what remains an emerging and largely unexplored element of measurement is data originating from written (or spoken) text. As Bonica and Sen (Reference Bonica and Sen2021) observe, research leveraging tools for automated text analysis has emerged as a way of studying ideology across other actors and institutions, especially as it relates to discerning individual and policy-level positions in elected legislatures like the United States Congress. At their core, these studies assume that language retains special significance as a cue of underlying preferences. Given that ideology is understood as a system of belief organization and an attempt at understanding the world around us, language is central to its study since language provides us with meaning (Diermeier et al., Reference Diermeier, Godbout, Yu and Kaufmann2012; Thompson, Reference Thompson1987). “Intuitively,” according to Diermeier et al. (Reference Diermeier, Godbout, Yu and Kaufmann2012), “a political ideology specifies which issue positions go together, the ‘knowledge of what-goes-with-what’” (31).Footnote 10 While it is correct to say that language can evolve, and with it subtly change the ideological nature of words and their meaning, we presume that this is part of a holistic process of idea generation and refinement over time. This knowledge helps us better discern variance in the framing of particular issues and their overarching sentiment, broadens considerations for how to take language seriously as it changes,Footnote 11 and offers a key to understanding elite discourse and how it is promulgated in the mass public. Indeed, given the progressive development of automated tools and machine learning, a wealth of literature has developed or implemented comprehensive analyses of ideologically oriented Congressional speech, such as Wordscores (Laver et al., Reference Laver, Benoit and Garry2003), Wordfish (Slapin and Proksch, Reference Slapin and Proksch2008), and Wordshoal (Lauderdale and Herzog, Reference Lauderdale and Herzog2016).

Yet, applications of these approaches to the courts, and particularly the Supreme Court, remain few. Until recently, Lauderdale and Clark (Reference Lauderdale and Clark2014) remained the most pivotal development in this field, as they were able to use an autoregressive preference model to scale case-level ideology for Justices between 1946 and 2005. Others, including Hausladen et al. (Reference Hausladen, Schubert and Ash2020), similarly approached case-level positions using data from the Federal Courts of Appeals.Footnote 12 Even then, both of these approaches scale ideology as a reflection of the cases themselves, rather than judicial actors. More recently, Cope (Reference Cope2024) developed Jurist-Derived Judicial Ideology Scores (JuDJIS), which leverage “information…collected by professional survey firms commissioned by the Almanac of the Federal Judiciary, a [triennially] published initiative which surveys a stratified sample of qualified experts for each judge” (2).Footnote 13 The result is a dynamic measure of judicial ideology recovered from a hierarchical n-gram analysis conditioned on how lawyers familiar with each judge would review their “ability; demeanor; trial practice/oral argument; settlement/opinion quality; and ideology.”

Using text to determine judicial ideology

To better understand how opinion language acts as a signal of judicial ideology, we must first imagine that the outcome in any case exists within a relatively confined “case space” (Lax, Reference Lax2011). As part of determining case outcomes, judges write opinions in an attempt to justify and persuade others of the “correctness” of their decision (Romano and Curry, Reference Romano and Curry2019). Within each case, there exists a specific number of topics that judges can choose from to frame their argument. Judges choose their words carefully when crafting opinions (Romano and Curry, Reference Romano and Curry2019), but different judges will choose different language to explain their arguments. Choices in topic selection, framing, and language are conditioned upon an individual judge’s views and beliefs concerning the right course of action in determining the outcome of any case – that is, on their ideology.

As we noted in an earlier section, attempts to map legal concepts and broader elements of lexical variation onto ideology remain few in studies of judicial politics. To date, the primary contributions to this literature remain Lauderdale and Clark (Reference Lauderdale and Clark2014), Hausladen et al. (Reference Hausladen, Schubert and Ash2020), and Cope (Reference Cope2024). Even then, none are directly applicable to our efforts. Our goal is to set aside dichotomous voting behaviors and alternative proxies and instead place the burden of mapping ideology on the text of written opinions – that is, on the law itself. Decisions by the Supreme Court represent a unique case study for accomplishing these ends.

However, important questions remain concerning whether cues of ideology emerge in legal text and, more importantly, how we might go about retrieving them. The answer to the first question must be yes. Irrespective of the underlying facts and merits of each case, opinions authored by the Justices represent policy positions. Particularly when the Court is divided, these positions can manifest across multiple opinions, and the Justices are able to use these opportunities to articulate their perspectives (Brace and Hall, Reference Brace and Hall1993; Hall, Reference Hall1987; Hettinger et al., Reference Hettinger, Lindquist and Martinek2004; Romano and Curry, Reference Romano and Curry2019; Songer, Reference Songer1982). Alternatively, when the Court is unified, we can assume the Justices are speaking with one voice and thus conveying some measure of certainty that minimizes ideological cues (Corley et al., Reference Corley, Steigerwalt and Ward2013). In essence, given that the Justices are unconstrained in providing concurrences and dissents, we can derive comprehensive elements of their perspectives in a dimension that is more robust than whether they voted to affirm or reverse.Footnote 14

The second question is clearly more complicated, though research on ideology as a function of articulated positions in legislative settings provides an intuitive path forward. In particular, we leverage work by Lauderdale and Herzog (Reference Lauderdale and Herzog2016) by applying Wordshoal to opinions issued by the Supreme Court between the 2005 and 2022 terms. At its core, this methodology employs a two-stage estimation strategy to retrieve ideal points from expressed positions offered during legislative debates. The Justices’ opinions lend themselves well to this approach, given that the motivations for their decision-making reflect their preferences.Footnote 15

The focus of our work on yielding inferences from broader lexical variance further lends itself well to this methodology. The Wordshoal approach places emphasis on how the text found within documents can be used to characterize position-taking by its author (or authors). In a legislative setting, this process addresses relative lexical variance within individual subsets (i.e., issue-specific debates) to draw generalizations about individual legislators’ broader positions. Further, because attributing equal predictive weight to specific words (or phrases) across debates of varying substance is sure to yield dubious results, the implications of particular word choices are sensibly restricted to individual debates. In essence, while certain words or phrases may inform us where to draw distinctions between competing coalitions on specific issues, assuming their influence is constant irrespective of subject matter would bias our results. Instead, Wordshoal offers a practical alternative that (1) addresses distinct lexical variance as emblematic of “stated positions” (Lauderdale and Herzog, Reference Lauderdale and Herzog2016, 375) within reduced (debate) settings and subsequently (2) considers how that variance contributes to drawing broader discrimination in a legislator’s latent preferences. We discuss our amended implementation of Wordshoal below.

Implementing Wordshoal

Wordshoal is implemented using a two-stage hierarchical process, which uses a Poisson scaling model to retrieve debate-level estimates via Wordfish (Slapin and Proksch, Reference Slapin and Proksch2008) and subsequently aggregates them into a general latent position for each individual actor. We review this process below and maintain the notation of Lauderdale and Herzog (Reference Lauderdale and Herzog2016), using brackets to indicate where substitutions are made for the Supreme Court in place of their original legislative observations. Under this substitution, Justices [legislators] are indexed as $ i \in 1, 2, \dots, N $, cases [debates] as $ j \in 1, 2, \dots, M $, and words as $ k \in 1, 2, \dots, K $.

$$ {\omega}_{ijk}\sim \mathrm{Poisson}\left({\mu}_{ijk}\right) $$
$$ {\mu}_{ijk}=\exp \left({\nu}_{ijk}+{\lambda}_{jk}+{\kappa}_{jk}{\psi}_{ij}\right) $$

Here, “the frequency that [Justice] i will use word k in [case] j depends on a general rate parameter $ {\nu}_{ijk} $ for [Justice] i’s word usage in [case] j, word-[case] usage parameters $ {\lambda}_{jk} $ and $ {\kappa}_{jk} $, and the individual’s [case]-specific position $ {\psi}_{ij} $. The $ {\nu}_{ijk} $ parameters capture the baseline rate of word usage in a given [opinion], which is simply a function of the length of the [opinion]. The $ {\lambda}_{jk} $ capture variation in the rate at which certain words are used. The $ {\kappa}_{jk} $ capture how word usage is correlated with the [Justice]’s [case]-specific position $ {\psi}_{ij} $” (adapted from Lauderdale and Herzog, Reference Lauderdale and Herzog2016, 377).
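To make the first-stage specification concrete, the following is a minimal Python sketch of its data-generating process for a single case, using hypothetical parameter values; it is not the authors’ replication code (which, per footnote 19, builds on the quanteda ecosystem in R). In the actual analysis these parameters are estimated from the document-feature matrix of opinions (a Wordfish fit within each case) rather than drawn at random.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 9, 200                        # Justices (i) and vocabulary terms (k) within one case j
psi = rng.normal(0, 1, size=N)       # case-specific positions psi_ij (hypothetical)
nu = rng.normal(4, 0.5, size=N)      # baseline rates nu_ijk; held constant over k here, reflecting opinion length
lam = rng.normal(0, 1, size=K)       # word-case frequency parameters lambda_jk
kappa = rng.normal(0, 0.5, size=K)   # word-case discrimination parameters kappa_jk

# Poisson rate mu_ijk = exp(nu_ijk + lambda_jk + kappa_jk * psi_ij)
mu = np.exp(nu[:, None] + lam[None, :] + kappa[None, :] * psi[:, None])
counts = rng.poisson(mu)             # omega_ijk: simulated term counts, one row per Justice's opinion

print(counts.shape)                  # (9, 200)
```

Estimation reverses this process: given the observed counts, the case-level positions $ {\psi}_{ij} $ (along with $ {\lambda}_{jk} $ and $ {\kappa}_{jk} $) are recovered by maximizing the Poisson likelihood separately for each case.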

With these estimates of the case-level dimensions, we can subsequently use Wordshoal to aggregate to (in this instance) a single latent dimension with normally distributed error.

$$ {\psi}_{ij}\sim N\left({\alpha}_j+{\beta}_j{\theta}_i,{\tau}_i\right) $$
$$ {\theta}_i\sim N\left(0,1\right) $$
$$ {\alpha}_j,{\beta}_j\sim N\left(0,{\left(\tfrac{1}{2}\right)}^2\right) $$
$$ {\tau}_i\sim \mathcal{G}\left(1,1\right) $$

Again, drawing from Lauderdale and Herzog (Reference Lauderdale and Herzog2016), “this specification means that the primary dimension of word usage variation in individual [cases] can be more or less strongly associated with the aggregate latent dimension $ \theta $ being estimated across all [cases], with either positive or negative polarity for any particular [case]. Essentially, this allows the model to select out those [case]-specific dimensions that reflect a common dimension (larger estimated values of $ {\beta}_j $), while down-weighting the contribution of [cases] where the word usage variation across individuals seems to be idiosyncratic ( $ {\beta}_j\approx $ 0). The priors on $ {\alpha}_j $ and $ {\beta}_j $ allow the model to remain agnostic about the relative polarity of individual [case] dimensions, while constraining the common latent dimension of interest to a standard normal scale” (378).
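As an illustration of the second stage, the sketch below recovers a common dimension $ {\theta}_i $ from a matrix of simulated case-level positions $ {\psi}_{ij} $ by iterating the conditional (MAP) updates implied by the priors above. It is a deliberately simplified stand-in for the full Bayesian estimation: the Justice-level precisions $ {\tau}_i $ are held at 1 rather than sampled under their Gamma(1, 1) prior, and the scale is re-identified by standardizing $ \theta $ on each pass.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical inputs: psi[i, j] is the first-stage position of Justice i in case j
N, M = 9, 300
true_theta = np.sort(rng.normal(0, 1, N))
alpha_true, beta_true = rng.normal(0, 0.5, M), rng.normal(0, 0.5, M)
psi = alpha_true + beta_true * true_theta[:, None] + rng.normal(0, 1, (N, M))

theta = rng.normal(0, 1, N)           # initialize the common dimension
prior_prec_ab = 1.0 / (0.5 ** 2)      # alpha_j, beta_j ~ N(0, (1/2)^2)

for _ in range(200):
    # Case-level update: ridge regression of psi[:, j] on [1, theta] for every case j
    X = np.column_stack([np.ones(N), theta])
    XtX = X.T @ X + prior_prec_ab * np.eye(2)
    ab = np.linalg.solve(XtX, X.T @ psi)          # 2 x M matrix of (alpha_j, beta_j)
    alpha, beta = ab[0], ab[1]

    # Justice-level update: theta_i given alpha, beta, with N(0, 1) prior and tau_i fixed at 1
    theta = ((psi - alpha) @ beta) / (1.0 + beta @ beta)
    theta = (theta - theta.mean()) / theta.std()  # crude re-identification of the scale

print(np.corrcoef(theta, true_theta)[0, 1])       # close to +1 or -1 (polarity is arbitrary)
```

The estimated $ \theta $ recovers the simulated ordering up to sign, mirroring the polarity agnosticism described in the quoted passage.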

However, our methodology departs slightly from Lauderdale and Herzog (Reference Lauderdale and Herzog2016) in how we treat the relative responsibility associated with opinion authorship. Unlike Congressional debates, where we can directly associate speeches with particular legislators, opinion authorship on the Supreme Court dictates that a majority (or at least a plurality) of the Justices will coalesce around a single opinion. Similar concerns emerge when we recognize that Justices frequently join concurrences or dissents. It might be problematic to assume that Justices choosing to join opinions – whether they be majorities, concurrences, or dissents – should be granted the same weight in estimating latent ideal points as those who actually authored them. In one sense, we can infer that, given the Justices’ capacity to author their own concurrences and dissents, failure to deviate from the language of the opinions they are joining indicates that those words are effectively their own as well. However, joining clearly lacks the same degree of authenticity as the structured assumption in Lauderdale and Herzog (Reference Lauderdale and Herzog2016) of directly tying each speech to an individual legislator. To account for this, we introduce a procedure that assigns relative weights reflective of opinion authorship. We can assume that, in any circumstance, authorship of a decision can be weighted fully as a representation of the writer’s voice, while choosing not to join an opinion bears no weight. Alternatively, joining majority opinions, dissents, or (special) concurrences might require additional considerations.Footnote 16 Lacking sufficient guidance from existing literature, we implement a combination of weighting arrangements to establish whether degrees of associative responsibility induce significant variance in the estimates.Footnote 17 We subsequently amend the second set of equations to include the parameter $ {\phi}_{ij} $,Footnote 18 representing the relative responsibility of an opinion for Justice i in case j.Footnote 19
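The fragment below illustrates one way the responsibility weights could enter the Justice-level update, treating $ {\tau}_i{\phi}_{ij} $ as a precision so that larger values of $ {\phi}_{ij} $ pull a case-level estimate harder toward a Justice’s common position. The specific weight values (1.0 for authors, 0.5 for joiners, 0.0 for non-participants) and the precision interpretation are placeholder assumptions for illustration only; the schemes actually used are detailed in the Supplemental Appendix (Table A1).

```python
import numpy as np

def weighted_theta_update(psi, alpha, beta, phi, tau=1.0):
    """MAP update of theta_i with case-level precisions tau_i * phi_ij.

    psi[i, j] is NaN where Justice i neither authored nor joined an opinion in case j;
    phi[i, j] is the (hypothetical) responsibility weight for that Justice-case pair.
    """
    w = tau * phi                                    # element-wise precision weights
    num = np.nansum(w * beta * (psi - alpha), axis=1)
    den = 1.0 + np.nansum(w * beta ** 2, axis=1)     # the 1.0 comes from the N(0, 1) prior on theta_i
    return num / den

# Toy usage: 3 Justices, 2 cases; weights of 1.0 (authored), 0.5 (joined), 0.0 (did not participate)
psi = np.array([[0.8, np.nan], [-0.2, 0.4], [np.nan, -1.1]])
phi = np.array([[1.0, 0.0], [0.5, 1.0], [0.0, 0.5]])
alpha, beta = np.zeros(2), np.array([0.9, 0.7])
print(weighted_theta_update(psi, alpha, beta, phi))
```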

Scaling Supreme Court Justices between the 2005 and 2022 terms

To this point, we have introduced an amended application of Wordshoal as a methodology for estimating latent ideology as an expression of lexical variance, without actually considering whether a Justice voted to affirm or reverse. That is, it draws on the expectation that variance in word choice signals the Justices’ relative positions and reflects separability among individuals. Granted, the notion of coalition structures raises potential concerns. Apart from our inability to make the same assumption of invariably associating opinions with single authors, it is important to further recognize that the Justices have a tendency to coalesce among like-minded partisans – particularly in response to divisive cases. With respect to our attempt to remove voting and ideology from our procedure, we are cognizant that these behaviors encapsulate aspects of both. However, given our application of different weighting schemes and the capacity for the Justices to voice divergent perspectives on any opinion, we are confident that we have taken the most appropriate steps to map the Justices’ ideal points as reflections of the language used in their opinions.

We provide estimates of individual Justices below, reflective of 1,972 opinions authored in 678 non-unanimous cases between the 2005 and 2022 terms (Figure 1),Footnote 20 a period that coincides with the Roberts Court (2005–present). Apart from the scheme that credits only those who authored a particular opinion (‘Sole Weight’),Footnote 21 we observe noticeable consistency in our estimates. Furthermore, the breadth of error – which represents 95 percent confidence intervals – is only wide for those whose presence in the data is more limited.Footnote 22 Alternatively, those present for longer periods of observation tended to display more consistency in their estimated positions.

Figure 1. Wordshoal Estimates (High Weight) by Justice and Weighting Scheme.

Note: Left-Right axis ( $ {\theta}_i $ ) scaled to represent progressive values of Liberalism to Conservatism. For more information regarding the associated weighting schemes, see the Supplemental Appendix (Table A1).

At face value, our estimates retain discernible validity. Insofar as we can provide normative assessments of the Justices’ categorical positions on the ideological spectrum, we know that Justices Sotomayor and Ginsburg should populate divergent positions from Thomas and Scalia. Further, the nuance of those we would expect to populate the center – for example, Justices Kennedy and O’Connor – is likewise reflected in our estimates. Alternatively, the ordinal rankings of our estimated positions require careful consideration. Notwithstanding measures of potential error, our rankings place Justices Ginsburg and Alito in the most polarized positions. The exact locations of the Justices relative to each other on our unbounded scale might lead to some debate concerning whether the ordinal and cardinal distances are, in a sense, the most accurate representation.Footnote 23 What we can say is that the relative location of the Justices on this scale lends credibility to its accuracy, particularly given the clustering of objectively like-minded Justices.

Another test of our estimates’ validity is to compare them with established methodologies. For this, we draw on the ideal points estimated by Martin and Quinn (Reference Martin and Quinn2002), from which most measures of judicial ideology that consider the Justices’ voting behaviors draw their inspiration. Given the dynamic nature of their scaling procedure, we plot the average position for each Justice against those recovered using Wordshoal (Figure 2). While the directionality of both measures is consistent, their relative magnitudes differ. To account for this, we standardize both using z-score normalization, ensuring they share a common scale with a mean of zero and a standard deviation of one. We then plot the relative variance of Justice-level estimates (Figure 3).

Figure 2. Wordshoal Estimates (High Weight) versus Martin and Quinn (Reference Martin and Quinn2002).

Note: This figure compares Wordshoal estimates using High Weights with the dynamic ideal point estimates by Martin and Quinn (Reference Martin and Quinn2002). Both represent Justice-level means across the 2005–2021 observation terms, where averages for Martin-Quinn were recovered from their post_mn variable. While the scales are not equal or normalized, the (left-right) progression of Liberalism to Conservatism is observed in both. Correlation = 96.6%.

Figure 3. Comparison of Static Wordshoal (High Weight) versus Martin-Quinn.

Note: Both axes represent the absolute values of each Justice’s estimated ideal point using Martin-Quinn and Static Wordshoal, where both scales are standardized using z-score normalization. Values for Martin-Quinn (Static Wordshoal) are measured using the average of post_mn ( $ {\theta}_i $ ) across the observation period. Points nearest to the diagonal segment indicate greater correlation between the relative placement of a Justice. Alternatively, values above (below) the diagonal indicate greater relative ideological placement in Martin-Quinn (Static Wordshoal). Correlation = 86.84%.
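The comparison behind Figures 2 and 3 reduces to standardizing the two sets of Justice-level means and correlating them; a minimal sketch, using made-up values in place of the actual Wordshoal and Martin-Quinn estimates:

```python
import numpy as np

# Hypothetical Justice-level means (not the actual estimates):
# Wordshoal theta_i averages and Martin-Quinn post_mn averages over the same terms.
wordshoal = np.array([-1.6, -1.2, -0.9, -0.1, 0.3, 0.8, 1.1, 1.4, 1.7])
martin_quinn = np.array([-3.1, -2.4, -1.8, -0.2, 0.5, 1.3, 1.9, 2.6, 3.9])

def zscore(x):
    return (x - x.mean()) / x.std()

r = np.corrcoef(zscore(wordshoal), zscore(martin_quinn))[0, 1]
print(f"Pearson correlation: {r:.3f}")
```

Note that the Pearson correlation itself is unchanged by the z-score step; the standardization matters for placing the two measures on a common scale when comparing relative placements, as in Figure 3.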

Notwithstanding the difference in observable behaviors used to estimate latent ideal points, we observe a strong correlation between our measure and Martin-Quinn. A particular difference is the relative position of Clarence Thomas, who is estimated to be discernibly more conservative in Martin-Quinn, as well as subtle variance in the Justices’ ordinal rankings. However, we are not of the mind that our measure fails to discern what is arguably the “correct” ordering or placement of each Justice. Instead, it is important to recognize the incomplete nature of the data: while Martin-Quinn maps the full extent of each Justice’s tenure on the Court, ours represents the full service of only seven.Footnote 24 Accounting for those whose tenures (as of the 2022 term) are fully represented in the data, we observe strong similarity with Martin-Quinn’s ordinal rankings. We imagine that incorporating the absent data for the remaining eight Justices whose tenures are not fully represented could remedy any potential shortcomings in (ordinal) placement. Even then, the fact that we observe such a high degree of correlation with an established and respected measure of judicial ideology underscores the robustness of our methodology. The Justices are keen to express ideology as a principal feature of their authored opinions, and our methodology is able to map these distinct behaviors.

Discussion

Ideology serves as a principal element of judicial decision-making. Research that maps latent features of ideology among judicial actors has drawn from a multitude of direct and indirect observational behaviors. Those with the most robust methodologies – particularly Martin and Quinn (Reference Martin and Quinn2002) – center on estimates rooted in the Justices’ dichotomous voting behaviors. However, much as theirs and others’ approaches drew considerably from advancements in studies of legislative behavior,Footnote 25 we suggest a similar adoption strategy. Namely, our work adapted a two-stage Wordshoal procedure from Lauderdale and Herzog (Reference Lauderdale and Herzog2016) to instead represent Supreme Court Justices in a space reflective of their opinion writing behaviors. Our goal is not to suggest that this approach is invariably the best and should be adopted by researchers without any hesitation. Instead, we aimed to demonstrate that latent features of ideology are not only prevalent in judicial text, but that our approach provides a means to explore judicial behaviors through a lens that incorporates articulated perspectives on the law.

Using decisions authored between the 2005 and 2022 terms, we implemented Wordshoal to develop latent ideal points for each Justice serving during this observation period. Our efforts yielded impressive degrees of correlation with established measures like Martin and Quinn (Reference Martin and Quinn2002), largely irrespective of how we chose to weight authorship of majority opinions. While there are subtle differences in ordinal placement between our approach and Martin-Quinn, we expect these issues to be the result of incomplete data – which we plan to rectify in the future. Taken together, our approach provides a robust measure of latent judicial ideology rooted in lexical variance that, for all intents and purposes, mirrors normative assumptions of the Justices’ relative ideological positions.

However, like any measure of judicial ideology referenced in Table 1, ours has potential shortcomings. First, as we noted in an earlier section, we are not sufficiently confident that coalition alignments can be fully removed as a predictor. Unlike the initial application of Wordshoal in Congressional settings (Lauderdale and Herzog, Reference Lauderdale and Herzog2016), the Court’s reliance on coalescing around majority and other separate opinions generally negates our ability to draw 1:1 assessments of Justice-author observations. That is, at minimum, four Justices will coalesce around a single plurality opinion – though a minimum of five to constitute a majority opinion is understandably more common. To account for this, we introduced (1) any and all opinions within the scope of decided cases,Footnote 26 and (2) a collection of weighting schemes to induce variance in how particular opinions should be associated with the non-authoring Justices joining majorities, dissents, and (special) concurrences. Apart from circumstances where we restricted responsibility to only those who authored a particular opinion, we observe discernible continuity and reliability in the estimates. Even then, our “Sole Weights” scheme has a greater influence on scaled positions than on ordinal rankings. Further, we are not convinced that this most restrictive scheme is the best representation of how we should discern ideological positions, if only because coalescing around a written opinion is a conscious decision that the Justices are by no means forced to make. If the Justices feel it is necessary to deviate from the language of opinions written to reflect the majority (minority) position, they are entirely free to do so. Choosing to do so (or not) reflects their authentic feelings and should be viewed as such to some tangible degree.

Alternatively, a second concern that we do not fully address in earlier sections is the static assumption underpinning the estimates. In particular, Wordshoal is estimated in two stages: first from a local (case-level) dimension, then to a common dimension reflective of the predictive weights we can derive from the larger collection of local dimensions. However, the estimates themselves are held constant. As such, our estimates displayed in Figure 1 are static representations of each Justice’s latent position across the observation period. Yet, given the broader literature regarding “ideological drift” (Epstein et al., Reference Epstein, Martin, Quinn and Segal2007), we know the Justices’ positions are unlikely to be static across their tenures. With this in mind, we explore a dynamic specification in the Supplemental Appendix by allowing for variance in $ {\theta}_i $ across successive terms. The preliminary results yield interesting inferences, particularly by improving the correlation with Martin and Quinn (Reference Martin and Quinn2002), using normalized scales, to approximately 96 percent.
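One natural way to operationalize such a dynamic specification, in the spirit of Martin and Quinn (Reference Martin and Quinn2002), is a random-walk prior linking a Justice’s position across successive terms; the exact specification used in the Supplemental Appendix may differ, so the following is only indicative:

$$ {\theta}_{i,t}\sim N\left({\theta}_{i,t-1},{\sigma}_{\theta}^2\right),\qquad {\psi}_{ij}\sim N\left({\alpha}_j+{\beta}_j{\theta}_{i,t(j)},{\tau}_i{\phi}_{ij}\right) $$

where $ t(j) $ denotes the term in which case $ j $ was decided and $ {\sigma}_{\theta}^2 $ governs how much a Justice’s position may drift between terms.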

Future work will surely lead us to continue these efforts, particularly as it relates to overcoming the established shortcomings and extending the dynamic specification. However, the underlying motivation of this research is not to fill some longstanding gap in the judicial politics literature or to relitigate the contentious debates that often arise with respect to measures of judicial ideology, but rather to demonstrate what we believe to be a strategy for estimating judicial ideology using an observation strategy that leverages articulated perspectives on the law. We maintain that our long-term goal is to measure ideology among jurists across both state and federal courts.Footnote 27 We anticipate two primary obstacles. The first is assembling a sizeable and sufficient repository of published opinions from each of the 52 state courts of last resort, as well as from the federal district courts, courts of appeals, and Supreme Court. As we demonstrate in Figure 1, the greatest Justice-level variance appears to emerge in response to insufficient data. Those whose tenures are most fully encapsulated in our observation period demonstrated much greater consistency in their estimated positions in the first dimensional space. Second, while retrieving an assortment of estimates spanning state and federal judicial institutions would surely be a worthwhile exercise, we are devoted to developing a strategy that would make them comparable across courts and levels of the judicial hierarchy.

Supplementary material

The supplementary material for this article can be found at http://doi.org/10.1017/jlc.2025.10004.

Data availability statement

Replication materials can be found on the Harvard Dataverse. Available at: https://doi.org/10.7910/DVN/DLGKQF.

Footnotes

1 Justice Rehnquist’s exact quote was specific to the influence of public opinion and outside audiences on judges; however, it is not a far step to say these external influences also shape the “philosophy” or ideology of a judge when making a decision. See Rehnquist’s comments on Judicial Isolation and Constitutional Law: https://www.nytimes.com/1986/04/17/us/required-reading.html

2 Note: We provide an extended discussion in the Supplemental Appendix regarding our selection of the Roberts Court as our principal observation period, as well as summary statistical information regarding the data (opinion-level observations) employed in this analysis.

3 While we touch briefly on the various ways scholars have measured judicial ideology, our intention here is not to relitigate which measurement strategy is (arguably) best, most accurate, or the most sincere representation. For an excellent and comprehensive discussion of measurement strategies, see Bonica and Sen (Reference Bonica and Sen2021).

4 The list of citations here would arguably be immeasurable and surely incomplete. We agree with the sentiment of Hughes et al. (Reference Hughes, Wilhelm and Wang2023, 1) that “judicial ideology is a cornerstone of public law” – one that all research must touch in some way.

5 “This is no surprise: the US Supreme Court is the most important court in the country and the final stopping point for many politically sensitive issues. Also, from a research standpoint, the Supreme Court lends itself relatively well to ideological measurement. First, unlike most other courts in the United States, all nine members of the Court hear and vote on cases together. Second, a small and tractable docket makes it possible to subjectively hand-code cases in order to estimate judicial ideology” (Bonica and Sen, Reference Bonica and Sen2021, 98).

6 We would note, however, that Table 1 is by no means fully encompassing. For additional information, including those related to measures of judicial ideology at the state level, see Brace et al. (Reference Brace, Langer and Hall2000), Hughes et al. (Reference Hughes, Wilhelm and Wang2023), and Windett et al. (Reference Windett, Harden and Hall2015), among others.

7 There is at least some indication that the legal community is shifting its view regarding the role of ideology, thanks primarily to recent decision-making on the U.S. Supreme Court. Most recently, see Hawaii Supreme Court Justice Eddins’s concurrence in City and County of Honolulu v. Sunoco (2023), in which he states, “Enduring law is imperiled. Emerging law is stunted. A Justice’s personal values and ideas about the very old days suddenly control the lives of present and future generations.”

8 For discussions on political salience, see Bailey et al. (Reference Bailey, Kamoie and Maltzman2005) and Clark et al. (Reference Clark, Lax and Rice2015).

9 The authors in the aforementioned article are themselves self-selective in which cases deserve the most attention to generalize the Justices’ behaviors. They choose to ignore “constitutional cases” – that is, those that consider applications of constitutional principles, which are overwhelmingly decided without unanimity, and instead tend to reflect underlying notions of liberal and conservative positions. The key, we argue, is to recognize that any attempts to represent ideology – either empirically or otherwise – require scholars and observers to consider the broadest domain of judicial decision-making because unanimity is hardly an assurance, and decisions on the merits of great importance often entail divergent coalitions.

10 See also Poole (Reference Poole2007).

11 While we do not believe that this is an issue for our analysis here given the time period, researchers should consider how language evolves and the stability of a word’s meaning over time. Researchers should exercise caution when extending this and other models relating language to ideology, and be aware of how terms are being used and modified as they become ideologically entrenched. This is particularly important for opinion content, where language is often adopted and borrowed from past precedent, but is also part of a deeper consideration on the politics of language and word choice, and the link between communication and political meaning.

12 Lauderdale and Clark (Reference Lauderdale and Clark2014) incorporated Latent Dirichlet Allocation (LDA) to map ideal points for the Supreme Court onto dimensions reflective of the particular issues (topics) discussed in the cases the Justices are deciding. Alternatively, Hausladen et al. (Reference Hausladen, Schubert and Ash2020) use a supervised machine learning approach to predict the ideological ‘direction’ of case outcomes in the Courts of Appeals from the associated opinion texts.

13 While this work is surely a core development in the literature, it remains reliant on proxies (i.e., survey responses from lawyers) to supplement judicial behavior.

14 This is not to say that they are not motivated to coalesce toward single opinions – that is, the notion that unanimity (at least theoretically) prescribes more associative weight to the perceived legitimacy of the Court’s decision is not lost on the Justices. However, there exist no institutional constraints preventing the Justices from authoring separate concurrences or dissents, should they so choose.

15 That is, the Justices’ decisions – expressed as both their votes and the reasoning articulated in their opinions – are reflective of their inherent beliefs. This assumption draws heavily from the attitudinal account of Supreme Court decision-making (Segal and Spaeth, Reference Segal and Spaeth2002), in which the Justices are policy-oriented actors whose decision-making reflects authentic preferences.

16 While we recognize that additional considerations exist concerning opinion authorship in collegial courts like the Supreme Court – in particular, norms concerning opinion assignment – we do not attempt to model them here. While we may choose to remedy this in the future, we must acknowledge that Justices are generally unconstrained in authoring separate opinions. Given such a dynamic, we are confident that their choices to sign on to opinions or otherwise deviate represent behaviors independent of who is assigned authorship of the Court’s majority opinion.

17 For more information regarding the weighting schemes, please see the Supplemental Appendix (Table A1).

18 Such that $ {\psi}_{ij}\sim N\left({\alpha}_j+{\beta}_j{\theta}_i,{\tau}_i{\phi}_{ij}\right) $

19 The established approach for Wordshoal used by Lauderdale and Herzog (Reference Lauderdale and Herzog2016) is currently available in the quanteda package for R (Benoit et al., Reference Benoit, Watanabe, Wang, Nulty, Obeng, Müller and Matsuo2018). We would ask readers to direct themselves to Lauderdale and Herzog (Reference Lauderdale and Herzog2016) for a full account of the replication materials, though we will make ours (with adjustments for authorship responsibility weights) available upon publication.

20 Given the emphasis of first-stage Wordfish to draw distinctions between positions given divergent word choice on refined subjects (i.e., demarcating language most pivotal to establishing majority and minority coalitions), our estimates do not include unanimous decisions. Please see the Supplemental Appendix for more information regarding data collection and processing.

21 Where estimates for a particular Justice only consider opinions that they themselves authored.

22 For example, Justices William Rehnquist (who passed away in 2005), Sandra Day O’Connor (who retired in 2006), David H. Souter (retired 2009), and John Paul Stevens (retired 2010). This also includes newer Justices, such as Justices Amy Coney Barrett and, to a lesser extent, Brett Kavanaugh and Neil Gorsuch.

23 For example, the ordering of Justices Ginsburg, Sotomayor, Stevens, and Souter on the Left (most liberal), and Alito, Thomas, Scalia, and Kavanaugh on the Right (most conservative).

24 Chief Justice Roberts (Appointed 2005), as well as Justices Alito (2006), Sotomayor (2009), Kagan (2010), Gorsuch (2017), Kavanaugh (2018), and Barrett (2020).

25 Particularly Poole and Rosenthal (Reference Poole and Rosenthal1985); Clinton et al. (Reference Clinton, Jackman and Rivers2004), among others.

26 This provides additional robustness to the observation data, insofar as the inclusion of majority opinions, (special) concurrences, and dissents offers a broader accounting of justice-level perspectives, rather than simply addressing whether a Justice coalesced with the majority.

27 To date, the most reliable measure offering a means to bridge judges across the state and federal hierarchy has been Bonica and Woodruff’s (Reference Bonica and Woodruff2015) judicial “CFscore” methodology. Alternatively, the most reliable alternative for estimating ideology among state-level jurists is often viewed to be Party-Adjusted Judge Ideology (PAJID) scores (Brace et al., Reference Brace, Langer and Hall2000; Hughes et al., Reference Hughes, Wilhelm and Wang2023) and Windett et al.’s (Reference Windett, Harden and Hall2015) scores.

References

Bailey, Michael A. 2007. “Comparable preference estimates across time and institutions for the court, congress, and presidency.” American Journal of Political Science 51(3): 433–448.
Bailey, Michael A., Kamoie, Brian, and Maltzman, Forrest. 2005. “Signals from the tenth justice: The political role of the solicitor general in supreme court decision making.” American Journal of Political Science 49(1): 72–85.
Bailey, Michael A. and Maltzman, Forrest. 2011. The Constrained Court: Law, Politics, and the Decisions Justices Make. Princeton, NJ: Princeton University Press.
Baum, Lawrence. 1997. The Puzzle of Judicial Behavior. Ann Arbor: University of Michigan Press.
Baum, Lawrence. 2009. Judges and Their Audiences: A Perspective on Judicial Behavior. Princeton, NJ: Princeton University Press.
Benoit, Kenneth, Watanabe, Kohei, Wang, Haiyan, Nulty, Paul, Obeng, Adam, Müller, Stefan, and Matsuo, Akitaka. 2018. “quanteda: An R package for the quantitative analysis of textual data.” Journal of Open Source Software 3(30): 774.
Bonica, Adam and Sen, Maya. 2017. “A common-space scaling of the American judiciary and legal profession.” Political Analysis 25(1): 114–121.
Bonica, Adam and Sen, Maya. 2021. “Estimating judicial ideology.” Journal of Economic Perspectives 35(1): 97–118.
Bonica, Adam and Woodruff, Michael J. 2015. “A common-space measure of state supreme court ideology.” The Journal of Law, Economics, and Organization 31(3): 472–498.
Bonneau, Chris W., Hammond, Thomas H., Maltzman, Forrest, and Wahlbeck, Paul J. 2007. “Agenda control, the median justice, and the majority opinion on the US supreme court.” American Journal of Political Science 51(4): 890–905.
Brace, Paul and Hall, Melinda G. 1993. “Integrated models of judicial dissent.” The Journal of Politics 55(4): 914–935.
Brace, Paul, Langer, Laura, and Hall, Melinda G. 2000. “Measuring the preferences of state supreme court judges.” The Journal of Politics 62(2): 387–413.
Clark, Tom S., Lax, Jeffrey R., and Rice, Douglas. 2015. “Measuring the political salience of supreme court cases.” Journal of Law and Courts 3(1): 37–65.
Clinton, Joshua, Jackman, Simon, and Rivers, Douglas. 2004. “The statistical analysis of roll call data.” American Political Science Review 98(2): 355–370.
Cope, Kevin L. 2024. “An expert-sourced measure of judicial ideology.” Available at SSRN 4742254.
Corley, Pamela C., Steigerwalt, Amy, and Ward, Artemus. 2013. The Puzzle of Unanimity: Consensus on the United States Supreme Court. Stanford: Stanford University Press.
Cross, Frank B. 1997. “Political science and the new legal realism: A case of unfortunate interdisciplinary ignorance.” Nw. UL Rev. 92: 251.
Diermeier, Daniel, Godbout, Jean-François, Yu, Bei, and Kaufmann, Stefan. 2012. “Language and ideology in congress.” British Journal of Political Science 42(1): 31–55.
Donnelly, Nora and Loeb, Ethan. 2023. “The supreme court is not as politicized as you may think.” The New York Times. [Opinion].
Edwards, Harry T. 1984. “Public misperceptions concerning the politics of judging: Dispelling some myths about the DC circuit.” U. Colo. L. Rev. 56: 619.
Epstein, Lee, Martin, Andrew D., Quinn, Kevin M., and Segal, Jeffrey A. 2007. “Ideological drift among supreme court justices: Who, when, and how important.” Nw. UL Rev. 101: 1483.
Fischman, Joshua B. and Law, David S. 2009. “What is judicial ideology, and how should we measure it.” Wash. UJL & Pol’y 29: 133.
Fix, Michael P. and Kassow, Benjamin J. 2020. US Supreme Court Doctrine in the State High Courts. Cambridge: Cambridge University Press.
Gerring, John. 1997. “Ideology: A definitional analysis.” Political Research Quarterly 50(4): 957–994.
Gibson, James L. and Nelson, Michael J. 2017. “Reconsidering positivity theory: What roles do politicization, ideological disagreement, and legal realism play in shaping US supreme court legitimacy?” Journal of Empirical Legal Studies 14(3): 592–617.
Giles, Michael W., Hettinger, Virginia A., and Peppers, Todd. 2001. “Picking federal judges: A note on policy and partisan selection agendas.” Political Research Quarterly 54(3): 623–641.
Hall, Melinda G. 1987. “Constituent influence in state supreme courts: Conceptual notes and a case study.” The Journal of Politics 49(4): 1117–1124.
Hansford, Thomas G. and Spriggs, James F. 2006. The Politics of Precedent on the US Supreme Court. Princeton, NJ: Princeton University Press.
Hausladen, Carina I., Schubert, Marcel H., and Ash, Elliot. 2020. “Text classification of ideological direction in judicial opinions.” International Review of Law and Economics 62: 105903.
Hettinger, Virginia A., Lindquist, Stefanie A., and Martinek, Wendy L. 2004. “Comparing attitudinal and strategic accounts of dissenting behavior on the US courts of appeals.” American Journal of Political Science 48(1): 123–137.
Hinkle, Rachel K. 2015. “Legal constraint in the US courts of appeals.” The Journal of Politics 77(3): 721–735.
Ho, Daniel E. and Quinn, Kevin M. 2010. “Did a switch in time save nine?” Journal of Legal Analysis 2(1): 69–113.
Hughes, David A., Wilhelm, Teena, and Wang, Xuan. 2023. “Updating PAJID scores for state supreme court justices (1970–2019).” State Politics & Policy Quarterly 23(4): 463–470.
Lauderdale, Benjamin E. and Clark, Tom S. 2014. “Scaling politically meaningful dimensions using texts and votes.” American Journal of Political Science 58(3): 754–771.
Lauderdale, Benjamin E. and Herzog, Alexander. 2016. “Measuring political positions from legislative speech.” Political Analysis 24(3): 374–394.
Laver, Michael, Benoit, Kenneth, and Garry, John. 2003. “Extracting policy positions from political texts using words as data.” American Political Science Review 97(2): 311–331.
Lax, Jeffrey R. 2011. “The new judicial politics of legal doctrine.” Annual Review of Political Science 14(1): 131–157.
Martin, Andrew D. and Quinn, Kevin M. 2002. “Dynamic ideal point estimation via Markov chain Monte Carlo for the US supreme court, 1953–1999.” Political Analysis 10(2): 134–153.
Murphy, Walter. 1964. Elements of Judicial Strategy. Chicago: University of Chicago Press.
Owens, Ryan J. and Wedeking, Justin. 2012. “Predicting drift on politically insulated institutions: A study of ideological drift on the United States supreme court.” The Journal of Politics 74(2): 487–500.
Poole, Keith T. 2007. “Changing minds? Not in congress!” Public Choice 131: 435–451.
Poole, Keith T. and Rosenthal, Howard. 1985. “A spatial model for legislative roll call analysis.” American Journal of Political Science 29(2): 357–384.
Pritchett, C. Herman. 1942. “The voting behavior of the supreme court, 1941-42.” The Journal of Politics 4(4): 491–506.
Richards, Mark J. and Kritzer, Herbert M. 2002. “Jurisprudential regimes in supreme court decision making.” American Political Science Review 96(2): 305–320.
Romano, Michael K. and Curry, Todd A. 2019. Creating the Law: State Supreme Court Opinions and the Effect of Audiences. Routledge.
Salamone, Michael F. 2018. Perceptions of a Polarized Court: How Division Among Justices Shapes the Supreme Court’s Public Image. Temple University Press.
Segal, Jeffrey A. and Cover, Albert D. 1989. “Ideological values and the votes of US supreme court justices.” American Political Science Review 83(2): 557–565.
Segal, Jeffrey A. and Spaeth, Harold J. 2002. The Supreme Court and the Attitudinal Model Revisited. Cambridge: Cambridge University Press.
Slapin, Jonathan B. and Proksch, Sven-Oliver. 2008. “A scaling model for estimating time-series party positions from texts.” American Journal of Political Science 52(3): 705–722.
Songer, Donald R. 1982. “Consensual and nonconsensual decisions in unanimous opinions of the united states courts of appeals.” American Journal of Political Science 26: 225–239.
Songer, Donald R. and Lindquist, Stefanie A. 1996. “Not the whole story: The impact of justices’ values on supreme court decision making.” American Journal of Political Science 40(4): 1049–1063.
Tamanaha, Brian Z. 2009. “The distorting slant in quantitative studies of judging.” BCL Rev. 50: 685.
Thompson, John B. 1987. “Language and ideology: A framework for analysis.” The Sociological Review 35(3): 516–536.
Windett, Jason H., Harden, Jeffrey J., and Hall, Matthew E. 2015. “Estimating dynamic ideal points for state supreme courts.” Political Analysis 23(3): 461–469.
Woolard, Kathryn and Schieffelin, Bambi. 1994. “Language ideology.” Annual Review of Anthropology 23.