The Human Rights Act requires courts, so far as it is possible to do so, to decide cases in conformity with the rights protected by the European Convention on Human Rights. Employees must bring a claim under UK employment law, and those rights, whether at common law or under statute, should then conform to Convention rights such as the right to respect for private life, the freedom to manifest a religion, and freedom of expression.
In the digital economy, quality is increasingly becoming the predominant variable of competition. Markets are expected to seek out that state of affairs in which product quality rather than efficiency is maximized. But an effective conceptual resolution of what constitutes product quality is more complex and elusive than previously thought, and there has been a widespread repudiation of the notion that dominant online platforms can be held accountable for failing to deliver something that a single descriptive standard would command them to produce. Furthermore, microeconomic theory provides little guidance for evaluating how adjustments in the level of competition in a market have a bearing on product quality. This chapter suggests that claims relating to product quality can best be resolved by underscoring loyalty. Product quality, viewed from this perspective, provides a framework for assessing the behavior of digital platforms while at the same time legitimizing the manner in which zero-price markets operate. The issue is most prominent with regard to search engine rankings, privacy, and the sale of goods in online marketplaces.
The area where social media has undoubtedly been most actively regulated is in their data and privacy practices. While no serious critic has proposed a flat ban on data collection and use (since that would destroy the algorithms that drive social media), a number of important jurisdictions including the European Union and California have imposed important restrictions on how websites (including social media) collect, process, and disclose data. Some privacy regulations are clearly justified, but insofar as data privacy laws become so strict as to threaten advertising-driven business models, the result will be that social media (and search and many other basic internet features) will stop being free, to the detriment of most users. In addition, privacy laws (and related rules such as the “right to be forgotten”) by definition restrict the flow of information, and so burden free expression. Sometimes that burden is justified, but especially when applied to information about public figures, suppressing unfavorable information undermines democracy. The chapter concludes by arguing that one area where stricter regulation is needed is protecting children’s data.
This brief introduction argues that the current, swirling debates over the ills of social media are largely a reflection of larger forces in our society. Social media is accused of creating political polarization, yet polarization long predates social media and pervades every aspect of our society. Social media is accused of a liberal bias and “wokeness”; but in fact, conservative commentators accuse every major institution of our society, including academia, the press, and corporate America, of the same sin. Social media is said to be causing psychological harm to young people, especially young women. But our society’s tendency to impose image-consciousness on girls and young women, and to sexualize girls at ever younger ages, pervades not just social but also mainstream media, the clothing industry, and our culture more generally. And as with polarization, this phenomenon long predates the advent of social media. In short, the supposed ills of social media are in fact the ills of our broader culture. It is just that the pervasiveness of social media makes it the primary mirror in which we see ourselves; and apparently, we do not much like what we see.
Killing the Messenger is a highly readable survey of the current political and legal wars over social media platforms. The book carefully parses attacks against social media coming from both the political left and right to demonstrate how most of these critiques are overblown or without empirical support. The work analyzes regulations directed at social media in the United States and European Union, including efforts to amend Section 230 of the Communications Decency Act. It argues that many of these proposals not only raise serious free-speech concerns, but also likely have unintended and perverse public policy consequences. Killing the Messenger concludes by identifying specific regulations of social media that are justified by serious, demonstrated harms, and that can be implemented without jeopardizing the profoundly democratizing impact social media platforms have had on public discourse. This title is also available as open access on Cambridge Core.
Data Rights in Transition maps the development of data rights that formed and reformed in response to the socio-technical transformations of the postwar twentieth century. The authors situate these rights, with their early pragmatic emphasis on fair information processing, as different from and less symbolically powerful than the utopian human rights of older centuries. They argue that, if an essential role of human rights is 'to capture the world's imagination', the next generation of data rights needs to come closer to realising that vision – even while maintaining their pragmatic focus on effectiveness. After a brief introduction, the sections that follow focus on socio-technical transformations, the emergence of the right to data protection, and new and emerging rights such as the right to be forgotten and the right not to be subject to automated decision-making, along with new mechanisms of governance and enforcement.
Generative AI has catapulted into the legal debate through popular applications such as ChatGPT, Bard, and Dall-E. While the predominant focus has hitherto centred on issues of copyright infringement and regulatory strategies, particularly within the ambit of the AI Act, it is imperative to acknowledge that generative AI also engenders substantial tension with data protection laws. Generative AI puts a finger on the sore spot of the contentious relationship between data protection law and machine learning: the unresolved conflict between the protection of individuals, rooted in fundamental data protection rights, and the massive amounts of data required for machine learning, which renders data processing nearly universal. In the case of LLMs, whose training corpora are scraped from much of the internet, training inevitably relies on, and possibly even creates, personal data under the GDPR. This tension manifests across multiple dimensions, encompassing data subjects’ rights, the foundational principles of data protection, and the fundamental categories of data protection. Drawing on ongoing investigations by data protection authorities in Europe, this paper undertakes a comprehensive analysis of the intricate interplay between generative AI and data protection within the European legal framework.
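As a rough illustration of why web-scale training corpora almost inevitably contain personal data, consider the following minimal Python sketch. It is not drawn from the paper: the regular expressions, the personal_data_hits helper, and the sample string are invented, and real pipelines use far more sophisticated detection.

import re

# Hypothetical illustration: even a crude scan of scraped web text turns up
# identifiers that plausibly count as personal data under the GDPR, such as
# e-mail addresses and phone-number-like strings.
EMAIL = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def personal_data_hits(scraped_text):
    """Count pattern matches that would likely qualify as personal data."""
    return {
        "emails": len(EMAIL.findall(scraped_text)),
        "phone_like": len(PHONE.findall(scraped_text)),
    }

sample = "Contact Jane Doe at jane.doe@example.org or +44 20 7946 0958."
print(personal_data_hits(sample))  # {'emails': 1, 'phone_like': 1}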
There is a conflict in law and in journalism ethics regarding the appropriateness of publishing truthful but scandalous information: What should be published and what should be edited out? In the past, judges routinely gave the press the right to make such determinations and often sided with journalists even in surprising situations in which the individual’s privacy interest seemed clear. In the modern internet era, however, some courts are more willing to side with the privacy of individuals over First Amendment press freedoms – and the case brought by professional wrestler Hulk Hogan against the Gawker website for publishing his sex tape without permission is one example. This chapter uses that scenario to explore the clash between an individual’s privacy rights and the rights of the press to decide what is news.
Since the 1990s, big data has grown rapidly, influencing business, government, and healthcare. Fueled by networked devices, social media, and affordable cloud storage, it features voluminous datasets with diverse types, rapid updates, and accuracy concerns. Applications span retail, manufacturing, transportation, finance, and education, yielding benefits like data-driven decisions, optimization, personalized marketing, scientific progress, and fraud detection. However, challenges arise: complexity that necessitates interdisciplinary collaboration, privacy issues, potential cyberattacks, and the need for robust data protections. Accurate interpretation is crucial, given the risk of costly misinterpretations. Moreover, the significant resources required for storage, processing, and analysis raise environmental concerns, while legal and ethical considerations add complexity. Overreliance on data may lead to missed opportunities, underscoring the importance of balancing insights with human judgment. In conclusion, big data offers immense potential but poses significant challenges. Navigating this landscape requires a nuanced understanding, fostering responsible data practices to unlock its potential for informed decision-making and advancements across diverse fields.
This chapter deals with the relationship between digital monies and basic societal values such as privacy and individual freedom. Threats to privacy and related concerns have risen in the digital age. Information technologies allow companies and governments to collect, store, maintain, and disseminate information on all dimensions of individual and collective life. Privacy is a basic human need defended by legislation and constitutions worldwide, and it helps explain the attractiveness of cash. Some of today’s commercial applications of information technology imply intrusions into the personal sphere. Societal concerns about anonymity, because it facilitates unlawful and criminal activity, must also be taken into consideration, but there are reasons why some privacy of monetary transactions should be preserved, and cash is uniquely suited for that. Another question concerns the freedom to choose one’s money. This idea was originally proposed by the so-called Austrian school of thought: followers of the school associated with Friedrich von Hayek argued that currencies should compete with one another. That school, however, underestimated important objections, first and foremost the collective-interest ingredient of a well-functioning money, which makes private competition ill suited as a means of promoting good monies. The chapter concludes by explaining why some of these objections apply to crypto assets as well.
This chapter deals with cash (banknotes and coins), the oldest and most traditional form of money in existence. Cash involves a paradox: On the one hand, it is technologically less advanced than modern means of payment like cards and apps, so one could presume that it should decline in use and eventually disappear. On the other, evidence from almost the whole world shows that the demand for cash is increasing, although it is used less frequently in certain settings such as online commerce, retail stores, and restaurants. Criminal activities may explain part of the puzzle, but not much. One advantage of cash is that it can be seen and touched, appealing to the senses and conveying a sense of security. Another is that it ensures absolute privacy of transactions. Other important characteristics explaining the popularity of cash are that it is simple (it requires no technology or complication whatsoever); definitive (it instantly settles any financial obligation); private and personal (it appeals to the desire for confidentiality); and self-sufficient (it does not depend on any other infrastructure functioning). We conclude therefore that physical cash is a useful complement to a robust and diversified monetary system in which digital means of payment gradually prevail.
This chapter discusses Jewish experiences, from life in cramped Judenhäuser, always subject to Gestapo violence, to the suffering of individuals and families in a variety of ghettos in eastern Europe. It covers the geographies of the Holocaust, house committees and activities within and outside ghetto walls, and communal organizations, economic activities, self-help, and familial strategies.
A potent investigative instrument in the fight against highly sophisticated criminal schemes camouflaged by layers of secrecy is the sting operation. However, its application provokes crucial questions of legality and admissibility. Additionally, the lack of legal provisions governing sting operations in India has resulted in conflicting judicial stances, calling for clarity on this issue. Hence, this paper examines the intricate legal and ethical challenges surrounding sting operations, which, on the one hand, aid in uncovering serious offences and foster the public interest but, on the other, threaten to infringe privacy rights and the fairness of trials. The paper analyses international practices in Canada and the United States of America, alongside judicial precedents and scholarly opinions in India, and recommends statutory inclusion of sting operations in the Indian legal system. The paper proposes stringent judicial control, elaborate ethical guidelines to avoid staging crimes, and regulations on media reporting to maintain the delicate balance between public interest and personal rights. The paper concludes with a model draft for legislative reform that seeks to strengthen the idea of justice without weakening fundamental rights.
This chapter scrutinizes the operation of public sector privacy and data protection laws in relation to AI data in the United States, the United Kingdom, and Australia, to assess the potential for utilizing these laws to challenge automated government decision-making. Government decision-making in individual cases will almost inevitably involve the collection, use, or storage of personal information, and may also involve drawing inferences from data already collected. At the same time, increased use of automated decision-making encourages the large-scale collection and mining of personal data. Privacy and data protection laws provide a useful chokepoint for limiting discrimination and other harms that arise from misuses of personal information.
The emergence of “FemTech”, a term used to describe technology-based or technology-enabled applications serving women’s health needs, as a driver of capital investment over the past decade is a notable development in advancing women’s health. Critics have raised important concerns regarding the pitfalls of FemTech, with privacy concerns being chief among them. This private market, however, should be integrated into the creation of systemwide corrections for the problems that plague women of color. To do so through a derivative FemTech framework (hereinafter the “Framework”), clear limitations must concurrently be overcome to realize its possibilities.
Over the years, businesses have been trying to identify ways to segment their customer base and engage in price discrimination. The objective is to offer different prices to different consumers based on a range of factors, such as age, location, income, and other demographic characteristics, which are considered capable of revealing buyers’ reserve prices. With the increasing use of digital technology, this practice has become even more accurate and sophisticated, leading to the emergence of personalized pricing. This pricing approach utilizes advanced algorithms and data analytics to approximate each purchaser’s willingness to pay with greater precision.
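As a purely illustrative sketch of the mechanism described above, the following Python snippet nudges a list price toward a willingness to pay estimated from a customer profile. The profile fields, weights, and caps are invented for illustration and are not drawn from any real pricing system.

from dataclasses import dataclass

# Toy personalized-pricing rule: estimate willingness to pay (WTP) from
# customer attributes, then quote within a floor and a capped markup.
@dataclass
class CustomerProfile:
    income_band: float         # 0.0 (low) to 1.0 (high), e.g. inferred from location
    past_purchase_rate: float  # share of prior visits that ended in a purchase
    price_sensitivity: float   # 0.0 (insensitive) to 1.0 (very sensitive)

def personalized_price(list_price, c):
    """Estimate WTP from the profile and keep the quote within set bounds."""
    estimated_wtp = list_price * (
        1.0
        + 0.15 * c.income_band
        + 0.10 * c.past_purchase_rate
        - 0.20 * c.price_sensitivity
    )
    # Keep the quote between 90% of the list price and a 25% markup.
    return round(min(max(estimated_wtp, 0.9 * list_price), 1.25 * list_price), 2)

print(personalized_price(100.0, CustomerProfile(0.8, 0.6, 0.1)))  # 116.0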
In an unprecedented ruling in 2018, the Brazilian Consumer Protection Authority imposed a fine on a popular online travel company, Decolar.com, for allegedly favouring foreign consumers over Brazilian residents during the 2016 Olympics held in Rio de Janeiro. The accusation was that Decolar.com had offered hotel reservations at different prices according to the consumer’s location as identified through their internet protocol (IP) address.
To our knowledge, this is the only case in Brazil thus far to have reviewed the practice of charging different prices to different consumers based on their specific characteristics.
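As a hypothetical illustration of the mechanism at issue (not Decolar.com’s actual system; the rate table and the geolocation stub below are invented), the following Python sketch selects the displayed nightly rate from a per-country table after geolocating the visitor’s IP address.

# Hypothetical sketch: the displayed rate is chosen from a per-country table
# after a (stubbed) IP-geolocation lookup.
NIGHTLY_RATE_BY_COUNTRY = {"BR": 180.0, "US": 240.0, "AR": 200.0}
DEFAULT_RATE = 220.0

def country_from_ip(ip_address):
    """Stand-in for a real IP-geolocation lookup (e.g. a GeoIP database)."""
    return "BR" if ip_address.startswith("177.") else "US"

def quoted_rate(ip_address):
    """Return the nightly rate displayed to a visitor from this IP address."""
    country = country_from_ip(ip_address)
    return NIGHTLY_RATE_BY_COUNTRY.get(country, DEFAULT_RATE)

print(quoted_rate("177.10.0.1"))   # 180.0 -> rate shown to a Brazilian visitor
print(quoted_rate("203.0.113.5"))  # 240.0 -> rate shown to a foreign visitor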
Personalized pricing is a form of pricing in which different customers are charged different prices for the same product depending on their ability to pay, based on the information the trader holds about a potential customer. Pricing plays an important role in consumers’ decision-making, and a firm’s performance can depend on its ability to execute a pricing strategy accordingly. Further, pricing also shapes perceived quality, value, and the willingness to buy. Usually, a consumer’s willingness to buy depends on transparency and fairness.
Technological developments have enabled online sellers to personalize the prices of goods and services.
As the personalization of e-commerce transactions continues to intensify, the law and policy implications of algorithmic personalized pricing (APP) should be top of mind for regulators. Price is often the single most important term of consumer transactions. APP is a form of online discriminatory pricing practice whereby suppliers set prices based on consumers’ personal information with the objective of getting as close as possible to their maximum willingness to pay. As such, APP raises issues of competition, privacy, personal data protection, contract, consumer protection, and anti-discrimination law.
This book chapter looks at the legality of APP from a Canadian perspective, covering competition law, commercial consumer law, and personal data protection law.
Lay people are often misinformed about what makes a password secure, the various types of security threats to passwords or password-protected resources, and the risks of certain compromising practices such as password reuse and mandatory password expiration. Expert knowledge about password security has evolved considerably over time, but on many points research supports general agreement among experts about best practices. Remarkably, though perhaps not surprisingly, there is a sizable gap between what experts agree on and what lay people believe and do. The knowledge gap might exist and persist because of intermediaries, namely professionals and practitioners as well as technological interfaces such as password meters and composition rules. In this chapter, we identify knowledge commons governance dilemmas that arise within and between different communities (expert, professional, lay) and examine implications for other everyday misinformation problems.
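As a rough illustration of the kind of heuristic a password meter might apply, the following Python sketch estimates entropy from a password’s length and the character classes it uses. The thresholds and the tiny common-password list are invented and do not represent expert consensus.

import math
import string

# One possible password-meter heuristic: reject a few known-common passwords,
# then estimate entropy from password length and the character classes in use.
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}  # tiny stand-in list

def estimated_entropy_bits(password):
    """Rough upper bound: length * log2(size of the character pool in use)."""
    pool = 0
    pool += 26 if any(ch.islower() for ch in password) else 0
    pool += 26 if any(ch.isupper() for ch in password) else 0
    pool += 10 if any(ch.isdigit() for ch in password) else 0
    pool += len(string.punctuation) if any(ch in string.punctuation for ch in password) else 0
    return len(password) * math.log2(pool) if pool else 0.0

def rate_password(password):
    if password.lower() in COMMON_PASSWORDS:
        return "very weak (common password)"
    return "strong" if estimated_entropy_bits(password) >= 60 else "weak"

print(rate_password("letmein"))            # very weak (common password)
print(rate_password("correct horse 42!"))  # strong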