Use Case 4 in Chapter 7 explores the regulation of MDTs in the context of employment monitoring under the General Data Protection Regulation (GDPR), the equality acquis, the Platform Work Directive (PWD), and the Artificial Intelligence Act (AIA). Article 88 GDPR serves as a useful foundation, supported by valuable guidance aimed at protecting employees from unlawful monitoring practices. In theory, most MDT-based practices discussed in this book are already prohibited under the GDPR. Additionally, the EU’s robust equality acquis can effectively address many forms of discrimination in this sector. The AIA reiterates some existing prohibitions related to MDT-based monitoring practices in the workplace. However, a core challenge in employment monitoring lies in ensuring transparency and enforcement. There has long been a call for a lex specialis for data protection in the employment context, which should include a blacklist of prohibited practices or processing operations, akin to the one found in the PWD. Notably, processing and inferring mind data should be included among the practices on this blacklist.
The Introduction sets out the book’s research question: whether future generations ought to be represented in the global legal order and its institutions to address the climate change challenge and, assuming a positive answer, how such representation might best occur. The massive bias against the interests of future generations in current climate law and policy-making is demonstrated, providing a powerful rationale for the urgent need to explore proxy-style mechanisms to represent future generations. The book’s pragmatist methodology (in the tradition of John Dewey) is explained: it involves analysing existing practices, and the values incorporated into those practices, and extending them to deal with new problems. The book’s legal realism methodology is also explained, including its application to the sources of international law. The strong links between the book and Earth System Governance scholarship are set out; finally, the structure of the book is explained.
This chapter examines conservative attacks on social media, and their validity. Conservatives have long accused the major social media platforms of left-leaning bias, claiming that platform content moderation policies unfairly target conservative content for blocking, labeling, and deamplification. They point in particular to events during the COVID-19 lockdowns, as well as President Trump’s deplatforming, as proof of such bias. In 2021, these accusations led both Florida and Texas to adopt laws regulating platform content moderation in order to combat the alleged bias. But a closer examination of the evidence raises serious doubts about whether such bias actually exists. An equally plausible explanation for why conservatives perceive bias is that social media content moderation policies, in particular against medical disinformation and hate speech, are more likely to affect conservative than other content. For this reason, claims of platform bias remain unproven. Furthermore, modern conservative attacks on social media are strikingly inconsistent with the general conservative preference not to interfere with private businesses.
This brief introduction argues that the current, swirling debates over the ills of social media are largely a reflection of larger forces in our society. Social media is accused of creating political polarization, yet polarization long predates social media and pervades every aspect of our society. Social media is accused of a liberal bias and “wokeness”; but in fact, conservative commentators accuse every major institution of our society, including academia, the press, and corporate America, of the same sin. Social media is said to be causing psychological harm to young people, especially young women. But our society’s tendency to impose image-consciousness on girls and young women, and to sexualize girls at ever younger ages, pervades not just social but also mainstream media, the clothing industry, and our culture more generally. And as with polarization, this phenomenon long predates the advent of social media. In short, the supposed ills of social media are in fact the ills of our broader culture. It is just that the pervasiveness of social media makes it the primary mirror in which we see ourselves; and apparently, we do not much like what we see.
This chapter explores key elements of AI as relevant to intellectual property law. Understanding how artificial intelligence works is crucial for applying legal regimes to it. Legal practitioners, especially IP lawyers, need a deep understanding of AI’s technical nuances. Intellectual property doctrines aim to achieve practical ends, and their application to AI is highly fact-dependent. Patent law, for example, requires technical expertise in addition to legal knowledge. This chapter tracks the development of AI from simple programming to highly sophisticated learning algorithms. It emphasizes that AI is rapidly evolving and that many of these systems are already widely adopted in society. AI is transforming fields like education, law, healthcare, and finance. While AI offers numerous benefits, it also raises concerns about bias and transparency, among many other ethical implications.
As social and educational landscapes continue to change, especially around issues of inclusivity, there is an urgent need to reexamine how individuals from diverse linguistic backgrounds are perceived. Speakers are often misjudged because of listeners’ stereotypes about their social identities, resulting in biased language judgments that can limit educational and professional opportunities. Much research has demonstrated listeners’ biases toward L2-accented speech, such as perceiving accented utterances as less credible, less grammatical, or less acceptable for certain professional positions. Artificial intelligence (AI) technology has since emerged as a viable alternative for mitigating listeners’ biased judgments. It serves as a tool for assessing L2-accented speech and for establishing intelligibility thresholds for accented speech, and AI facial-analysis systems are also used to assess characteristics such as gender, age, and mood. However, these AI systems may themselves hold racial or accent biases. Accordingly, the current paper discusses both human listeners’ and AI’s biases toward L2 speech, illustrating these phenomena in various contexts. It concludes with specific recommendations and future directions for research and pedagogical practices.
This chapter explores the single most important difference between Anglo-American and German/Continental trial procedures: bifurcation vs. unification. Should a court determine sentence at the same time as it adjudicates verdict? Or should the criminal process be divided, with sentencing taking place after conviction, in a separate ‘penalty phase’ of the criminal process? Common law (adversarial) jurisdictions take the bifurcated approach, while in civil law (inquisitorial) systems the sentencing decision is part and parcel of the decision to convict or acquit. The chapter investigates the merits of both approaches.
Comparing the two approaches to sentencing may yield important insights. Although neither system is likely to abandon its chosen methodology in favour of the alternative, there may be elements of each which can be adopted with a view to overcoming any structural deficiencies.
The U.S. Supreme Court is often regarded as an impartial arbiter of justice, yet various prejudices may influence its decisions. This article examines Supreme Court justices’ biases, focusing on how they invoke racialized stereotypes of criminality. We argue that justices are more likely to vote in favor of white, nonviolent litigants, reinforcing stereotypes that depict nonwhite defendants as inherently more criminal and violent. Drawing on the U.S. Supreme Court Database’s criminal procedure cases from 2005 to 2017, combined with an original dataset of litigants’ racial identities, we estimate a series of multilevel logistic regressions. Our findings show that litigant race, crime type, and justice ideology jointly shape judicial votes. We further investigate how bias appears in justices’ written opinions, revealing language that perpetuates racialized conceptions of criminality. Overall, our results underscore the Court’s role in constructing what it means to be both “criminal” and “nonwhite,” suggesting that the Court is not a neutral interpreter of law, but an institution shaped by broader social and political biases.
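The abstract names multilevel logistic regression as its estimation strategy. As a purely illustrative sketch (the variable names, the simulated data, and the choice of statsmodels’ Bayesian mixed GLM are all assumptions, not the authors’ actual specification), a random-intercept logistic model grouping votes by justice might look like this:

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Simulated stand-in for the vote-level data described above
# (hypothetical variable names; one row per justice vote).
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "vote_for_litigant": rng.integers(0, 2, n),       # 1 = vote in litigant's favor
    "litigant_white": rng.integers(0, 2, n),          # litigant race (binary here)
    "violent_crime": rng.integers(0, 2, n),           # crime type
    "justice_ideology": rng.normal(size=n),           # e.g., an ideology score
    "justice_id": rng.integers(0, 9, n).astype(str),  # grouping factor
})

# Fixed effects for race, crime type, their interaction, and ideology;
# a random intercept per justice captures the multilevel structure.
model = BinomialBayesMixedGLM.from_formula(
    "vote_for_litigant ~ litigant_white * violent_crime + justice_ideology",
    {"justice": "0 + C(justice_id)"},
    df,
)
result = model.fit_vb()  # variational Bayes fit
print(result.summary())
```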
This chapter explains how to estimate population parameters from data. We introduce random sampling, an approach that yields accurate estimates from limited data. We then define the bias and the standard error, which quantify the average error of an estimator and how much it varies, respectively. In addition, we derive deviation bounds and use them to prove the law of large numbers, which states that averaging many independent samples from a distribution yields an accurate estimate of its mean. An important consequence is that random sampling provides a precise estimate of means and proportions. However, we caution that this is not necessarily the case if the data contain extreme values. Next, we discuss the central limit theorem (CLT), according to which averages of independent quantities tend to be Gaussian. We again provide a cautionary tale, warning that this does not hold in the absence of independence. Then, we explain how to use the CLT to build confidence intervals, which quantify the uncertainty of estimates obtained from finite data. Finally, we introduce the bootstrap, a popular computational technique for estimating standard errors and building confidence intervals.
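Since the chapter closes with the bootstrap, a minimal sketch may help fix ideas. This is an illustrative percentile bootstrap, not the chapter’s own code; the function name and defaults are my assumptions:

```python
import numpy as np

def bootstrap_ci(data, estimator=np.mean, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap: resample with replacement, recompute the
    estimator on each resample, and take empirical quantiles of the
    resulting bootstrap distribution as the confidence interval."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    stats = np.array([
        estimator(rng.choice(data, size=data.size, replace=True))
        for _ in range(n_boot)
    ])
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return (lo, hi), stats.std()  # CI endpoints and bootstrap standard error

# Example: a 95% interval for the mean of a skewed sample.
sample = np.random.default_rng(1).exponential(scale=2.0, size=100)
print(bootstrap_ci(sample))
```

The standard error returned alongside the interval is simply the standard deviation of the recomputed estimates, matching the chapter’s use of the bootstrap to estimate standard errors.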
Fairness in service robotics is a complex and multidimensional concept shaped by legal, social and technical considerations. As service robots increasingly operate in personal and professional domains, questions of fairness – ranging from legal certainty and anti-discrimination to user protection and algorithmic transparency – require systematic and interdisciplinary engagement. This paper develops a working definition of fairness tailored to the domain of service robotics based on a doctrinal analysis of how fairness is understood across different fields. It identifies four key dimensions essential to fair service robotics: (i) furthering legal certainty, (ii) preventing bias and discrimination, (iii) protecting users from exploitation and (iv) ensuring transparency and accountability. The paper explores how developers, policymakers and researchers can contribute to these goals. While fairness may resist universal definition, articulating its core components offers a foundation for guiding more equitable and trustworthy human–robot interactions.
Experimental jurisprudence draws methods and theories from an increasingly wide variety of fields, including psychology, economics, philosophy, and political science. However, researchers interested in legal thought have thus far paid relatively little attention to its origins in development. This chapter highlights an emerging approach that leverages methods and insights from developmental science to better understand the nature and development of adult intuitions about the law. By studying children’s earliest intuitions about rules, laws, and other topics, this “intuitive jurisprudence” approach can provide new methods and theoretical frameworks for experimental jurisprudence, as well as clarify where the law does or does not match human intuitions about justice. Developmental psychology and legal scholarship already converge, and may be mutually informative, in a number of diverse areas; this chapter reviews several, including: intent and punishment; fairness and procedural justice; ownership and property rights; trust in testimony and evidentiary issues; and social biases and equal protection under the law.
The series of cases discussed in Part III are humbling reminders of how intertwined our patients and their support systems are with healthcare practitioners. TJ, Jimmy, Mrs. Blue, and Mrs. Winthorpe all have unique experiences in different corners of the healthcare system. Each case touches on the familiar experience of a healthcare team identifying what they believe is in the best interest of the patient, and of some factor, often the patient themselves, complicating that coming to fruition. Their experiences, including differing experiences of privilege, power, and disempowerment, are salient elements of their stories. These “haunting” and morally distressing cases are revisited with an additional lens of diversity, equity, identity, and bias, with considerations for how ethicists might more fully integrate these critical perspectives into ethics consultation.
Bias is a topic that has received intense academic study, but its importance within experimental jurisprudence has yet to be unpacked. To fill this gap, we make the following contributions in this chapter. First, we situate the topic within this newly named – but not necessarily new – academic movement: we present recent research on bias in the law and discuss whether it rightly fits within the remit of experimental jurisprudence. Second, continuing to draw on this recent research, we unpack issues inherent in explorations of bias – issues that are important, in the experimental jurisprudence context, for understanding participants and the data they generate, as well as researchers and the data they garner and interpret. Finally, we conclude by offering words of caution and guidance as bias research within experimental jurisprudence progresses.
As part of the legal test for bias, the courts have created a fictional fair-minded observer (the FMO) to act as a conduit for reasonable public perception. A number of scholars have raised concerns that the FMO neither resembles an average member of the public nor reasonably reflects general public opinion. This chapter presents our original empirical pilot study on expert versus lay attitudes to judicial bias. The study compares the responses of legal insiders (lawyers and judges) and nonlegal experts with a basic understanding of the law (law students) to leading cases on judicial recusal. We use vignettes based on real cases from England, Australia, and Canada that dealt with different claims of judicial bias (covering issues of race, prejudgment, and more). The study may allow us to draw conclusions about the similarities and differences between legal experts and laypeople in their perceptions of judicial bias, and we suggest ways the full study can address the methodological limitations of the pilot so that those conclusions can be drawn with greater confidence.
For a case-control study to be a suitable design, we need a good idea about the outcome of interest (or condition) described by a strong case definition. But what if we know quite a bit about the exposures we are interested in, but we are a little hazy on the potential outcomes associated with those exposures? If we consider a scientific question like the one posed in this chapter – ‘What happens if you eat pizza and chips every day?’ – we have specifically identified the exposures of interest, but can only guess what the outcomes might be. Okay, we could probably make fairly educated guesses about some of the potential outcomes (weight gain being foremost among these), but there remains a level of uncertainty about their timing, magnitude and variety. What is really needed to answer a question like this is a ‘cohort study’, a type of observational study in which ‘cohorts’ of people (population groups who share certain characteristics, such as being in the same work environment, or who are born in the same year) are sorted into groups on the basis of whether they have or have not been exposed to specific health-related factors.
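As a toy illustration of the comparison a cohort study ultimately enables (all numbers below are hypothetical, not from the chapter), the exposed and unexposed cohorts can be compared via a relative risk:

```python
# Hypothetical follow-up counts for the two cohorts.
exposed_cases, exposed_total = 60, 400        # daily pizza-and-chips group
unexposed_cases, unexposed_total = 30, 600    # comparison group

risk_exposed = exposed_cases / exposed_total        # 0.15
risk_unexposed = unexposed_cases / unexposed_total  # 0.05
relative_risk = risk_exposed / risk_unexposed       # 3.0

print(f"Risk in exposed:   {risk_exposed:.2f}")
print(f"Risk in unexposed: {risk_unexposed:.2f}")
print(f"Relative risk:     {relative_risk:.1f}")
```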
In Chapter 6 we heard about how we can identify and quantify associations between exposures and health outcomes within populations, and even between countries. We learnt how useful cross-sectional studies were for looking at a range of risk factors and outcomes as they exist in a defined population at a particular point in time. While they have a great number of advantages, it can sometimes be difficult to sort out the direction of the relationships identified using cross-sectional approaches – that is, current risk factors or exposures may not necessarily have caused current outcomes or diseases. If we want to move towards thinking about potential causal relationships, we need an approach that allows us to determine the relative strength of relationships between exposures and outcomes and provides some hints about temporality – that is, gives us a start on determining whether the exposure preceded the health event. We will need this type of study to address the question posed for this chapter: what might be causing all those headaches that health science students seem to complain about?
Arriving at evidence-based solutions requires strong evidence. Usually, this evidence will be derived from quality research, such as is often published in reputable scientific journals. But how do we know whether even these studies are good through and through? There is always the potential that pesky flaws, such as bias and confounding, can beset even the most (otherwise) perfect of studies. This is why the methods taken to avoid bias and confounding are always well described in good published studies, as is the potential for remaining sources of error for which the design is (inevitably) unable to account, but which might still influence findings. There is always a bit of uncertainty about any evidence provided by studies and, to add to this, the very real possibility that we are not getting the full story at all times. In a phenomenon known as ‘publication bias’, even really high-quality studies may not get published if they report non-significant results.
Graphs can help people arrive at data-supported conclusions. However, graphs might also induce bias by shifting the amount of evidence needed to make a decision, such as deciding whether a treatment had some kind of effect. In 2 experiments, we manipulated the early base rates of treatment effects in graphs. Early base rates had a large effect on a signal detection measure of bias in future graphs even though all future graphs had a 50% chance of showing a treatment effect, regardless of earlier base rates. In contrast, the autocorrelation of data points within each graph had a larger effect on discriminability. Exploratory analyses showed that a simple cue could be used to correctly categorize most graphs, and we examine participants’ use of this cue among others in lens models. When exposed to multiple graphs on the same topic, human judges can draw conclusions about the data, but once those conclusions are made, they can affect subsequent graph judgment.
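The abstract refers to a signal detection measure of bias. As a hypothetical illustration (not the study’s analysis code; the rates below are invented), the standard sensitivity and bias quantities can be computed from hit and false-alarm rates:

```python
from statistics import NormalDist

def sdt_measures(hit_rate: float, fa_rate: float):
    """Return (d_prime, criterion_c) from hit and false-alarm rates."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)             # discriminability
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # c > 0: conservative responding
    return d_prime, criterion

# A judge who reports a treatment effect on 80% of effect-present graphs
# but also on 30% of effect-absent graphs:
print(sdt_measures(0.80, 0.30))  # d' ~ 1.37, c ~ -0.16 (slightly liberal)
```

In these terms, the abstract’s finding is that early base rates chiefly shifted the bias measure (c), while autocorrelation within each graph had the larger effect on discriminability (d').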
This chapter analyzes challenges to AI decision-making based on anti-discrimination law in the US, the UK, and Australia. Machine learning algorithms can be trained on datasets that contain human bias, thus resulting in predictions that are tainted with unfair discrimination. Anti-discrimination claims involve challenging the inputs to decision-making, such as the data or source code, and arguing that the flawed algorithm or the data fed into the AI system leads to discriminatory outcomes.
This book is about the science and ethics of clinical research and healthcare. We provide an overview of each chapter across the book’s three sections. The first section reviews foundational knowledge about clinical research. The second section provides background and critique on key components and issues in clinical research, ranging from how research questions are formulated, to how to find and synthesize the research that is produced. The third section comprises four case studies of widely used evaluations and treatments. These case examples are exercises in critical thinking, applying the questions and methods outlined in other sections of the book. Each chapter suggests strategies to help clinical research be more useful for clinicians and more relevant for patients.