
4 - Social Media Platforms as Common Carriers

Published online by Cambridge University Press: 05 September 2025

Ashutosh Bhagwat, University of California, Davis

Summary

As Chapter 1 discusses, one of the most consistent conservative critiques of social media platforms is that social media is biased against conservative content. A common policy proposal to address this is to regulate such platforms as common carriers. Doing so would require social media platforms to host, on a nondiscriminatory basis, all legal user content and to permit all users to access platforms on equal terms. While this seems an attractive idea – after all, who could object to nondiscrimination? – it is not. For one thing, the Supreme Court has now recognized that social media platforms possess "editorial rights" under the First Amendment to control what content they carry, block, and emphasize in their feeds. So, regulating platforms as common carriers, as Texas and Florida have sought to do, is unconstitutional. It is also a terrible idea. Requiring platforms to carry all content on a nondiscriminatory basis, even if limited to legal content (which would be hard to do), would flood user feeds with such lawful-but-awful content as pornography, hate speech, and terrorist propaganda. This in turn would destroy social media as a usable medium, to the detriment of everyone.

Information

Type: Chapter
In: Killing the Messenger: The War on Social Media, pp. 62–91
Publisher: Cambridge University Press
Print publication year: 2025
Licence: This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC 4.0 (https://creativecommons.org/cclicenses/).

4 Social Media Platforms as Common Carriers

At the heart of the conservative attack on social media sits a basic premise: Because social media is primarily a technology designed to permit individuals in society to communicate with each other, social media platforms should facilitate that communication without favoritism or interference. In other words, social media platforms should operate like telephone or telegraph companies, or like the postal service, transmitting all users' messages to each other without in any way altering them. It was this model that Justice Clarence Thomas was defending when, as discussed in Chapter 1, he advocated regulating social media platforms as "common carriers." And it is also this model that explicitly underpins the Florida and Texas laws, also discussed in Chapter 1, that seek to regulate social media content moderation practices. But is this the correct way to think of social media? Or, in the alternative, are social media platforms more analogous to traditional media entities such as newspapers, which have long been recognized to have First Amendment rights to control the content that they provide? These are the questions this chapter addresses.

As it turns out, a majority of the U.S. Supreme Court has now provided some fairly clear answers to these questions. In the Moody v. NetChoice and NetChoice v. Paxton cases decided on July 1, 2024 (henceforth called NetChoice),Footnote 1 six justices of the Supreme Court agreed that social media platforms possess editorial rights under the First Amendment, analogous to the rights that earlier cases granted traditional media such as newspapers and cable television operators (and, weirdly, organizers of parades).Footnote 2 And, furthermore, five justices, still a clear majority, went on to specify in detail that with respect to the “feed” feature of social media platforms – focusing in particular on Facebook’s News Feed and YouTube’s homepage – those editorial rights encompass decisions such as what content to carry, what content to block, and what content to amplify.Footnote 3 Finally, that same majority also flatly rejected Texas’s argument that the State of Texas had a legitimate interest in regulating social media platforms in order to correct their alleged anti-conservative bias.Footnote 4 Justice Samuel Alito, joined by (unsurprisingly) Justice Thomas and Justice Neil Gorsuch, wrote a separate opinion flatly rejecting all aspects of the majority’s analysis regarding editorial rightsFootnote 5 – but of course three votes is substantially short of a majority of the Supreme Court.

Ultimately, the Supreme Court did not finally resolve the legal issues raised in the NetChoice cases – whether Florida’s S.B. 7072 and Texas’s HB 20 violated the First Amendment – because all nine justices agreed that both the lower courts in these cases had failed to properly resolve a complex, preliminary procedural issue.Footnote 6 Nor did the justices in the majority directly address the issue of whether social media platforms should be considered common carriers, an argument that Justice Alito (citing Justice Thomas’s separate opinion discussed in Chapter 1) did raise.Footnote 7 But by squarely recognizing that platforms did enjoy First Amendment editorial rights, the majority quite clearly, if implicitly, rejected that argument, at least as to social media “feeds.” In this chapter we will explore the roots of the argument over common carrier status versus editorial rights for social media platforms. We will also look at the implications of the NetChoice decision for future efforts to regulate social media, and why it is so important that the Court got the outcome right.

4.1 Are Social Media Platforms Common Carriers?

As noted in Chapter 1, one of the most consistent critiques of social media platforms from the political right is the claim that social media firms are biased against conservative content, and unfairly single out such content for content moderation. According to the critics, such bias has taken the form of disproportionately blocking and/or labeling conservative content, secretly deprioritizing such content, and most famously, deplatforming conservative users including notably President Donald Trump (though as also noted in Chapter 1, the empirical evidence that such bias exists is weak). It was in response to these concerns that Justice Thomas suggested that social media platforms might qualify as common carriers, which in turn led the States of Florida and Texas to enact the legislation at issue in the NetChoice litigation. To understand the theory that drives these actions, we must begin by taking a bit of a deep dive into the concept of common carriage.

Let us begin with the foundational question of what, historically and legally, is common carriage – which is to say, what characteristics of particular services have led to them being classified as common carriers subject to extensive legal restrictions. Common carriage, as Justice Thomas pointed out, is an old concept, traceable to the English common law. At its heart, common carriage required certain forms of transportation businesses, as well as related professions such as innkeepers and warehousers, to serve customers on a nondiscriminatory basis (the common law also imposed liability on such businesses for negligence, but that is less relevant to our story).Footnote 8 This principle appears to have emerged from much earlier (medieval) law requiring all tradesmen who engaged in a “common calling” to serve the public without discrimination.Footnote 9 Regardless, however, long before the American Revolution, the common law had evolved to focus squarely on certain specific professions associated with transportation and travel.

That stability was challenged, unsurprisingly, by the technological revolutions of the nineteenth and twentieth centuries. The first challenge was railroads, which were in the transportation business but of course had no precise, common law analogue. Congress resolved that issue by designating railroads as common carriers in the Interstate Commerce Act of 1887.Footnote 10 Meanwhile, the telephone was invented (in 1876), and the question emerged whether this new industry should also have common carrier status. Courts originally split on this issue, but Congress resolved it by classifying telephone companies as common carriers in 1910, a designation it confirmed in the Federal Communications Act of 1934 (the foundational statute establishing the framework for federal regulation of the telecommunications and broadcasting industries).Footnote 11

The preceding discussion describes how common carrier regulation evolved to cover modern transportation and communications technologies, but it tells us little about what it was, precisely, that led judges and regulators to designate certain industries, but not others, as common carriers – because it is simply not true that all modern communications technologies have been treated as common carriers. The most important counterexamples in this regard are cable television operatorsFootnote 12 and television broadcasters,Footnote 13 both of which the courts have explicitly held are not common carriers.

Furthermore, specifically in the telecommunications field (which of course encompasses the internet), the statutory definition of common carrier – “[A]ny person engaged as a common carrier for hire, in interstate or foreign communication by wire or radio”Footnote 14 – is notably unhelpful. In an attempt to clarify this muddle, Justice Thomas claims to identify a number of considerations that scholars and courts have associated with common carrier status: market or monopoly power, whether one holds oneself out as serving the public, whether the business is “affected with the public interest,” whether the service is in the “transportation or communications industries,” and whether the business has received “special government favors.”Footnote 15 Thomas also argues that modern social media platforms share all of these characteristics.Footnote 16

However, Professor Christopher Yoo of the University of Pennsylvania has argued convincingly that most of the considerations Justice Thomas identifies have little historical basis. Monopoly power, for example, was not historically either sufficient (as demonstrated by Standard Oil) or necessary (as demonstrated by inns in large cities) for common carrier status.Footnote 17 As for being “affected with the public interest,” the Supreme Court has recognized since 1934 that this phrase does not identify any particular category of businesses.Footnote 18 Similarly, a bland statement that “transportation and communications” businesses have tended to be common carriers evades the questions of why that is so, and why it is that some, but not all, such services are treated as common carriers – an obviously relevant question when evaluating digital platforms.Footnote 19 Finally, regarding “special government favors,” while it is true that common carrier status has often historically been accompanied by franchises, sometimes granting legal monopolies or limitations on liability, it is simply not true that a franchise or license inevitably results in common carrier status even in communications industries – the obvious counterexamples being cable television operatorsFootnote 20 and television broadcasters.

That leaves "holding out as serving the entire public." Professor Yoo convincingly argues that, as a historical matter, this is probably the most widely accepted definition of a common carrier.Footnote 21 It is also consistent with the approach taken by the United States Court of Appeals for the District of Columbia Circuit (known as the DC Circuit), the most important regulatory court in the United States. In a case known as NARUC I, the court stated that "to be a common carrier one must hold oneself out indiscriminately to the clientele,"Footnote 22 or alternatively that "the carrier 'undertakes to carry all people indifferently.'"Footnote 23 In a later case with the same name (but different subject matter), NARUC II, the court reiterated this definition while clarifying that it was crucial to common carriage that the carrier transmit information of the customer's own choosing, not the carrier's.Footnote 24

It should be noted, however, that to identify the “holding out” approach as the dominant historical and regulatory definition of common carriage is to open up a host of very difficult questions. For one thing, this definition appears to leave firms with an easy option to avoid common carriage designation by simply announcing that they do not serve the general public – but surely Congress did not intend telephone companies to avoid regulation through such a simple ploy.Footnote 25 In addition, it should be obvious that a simple willingness to serve the general public does not convert a firm into a common carrier because if that were so, Walmart would be a common carrier. Something more is clearly required – and that something is “carriage,” meaning (as NARUC II indicates) a willingness to carry goods or messages chosen by the customer to the customer’s chosen destination without interference.

This discussion of the development and definition of common carrier status goes a long way toward explaining why social media platforms such as Facebook and Twitter/X cannot conceivably fit within that category, even if Justice Thomas's definition were correct. Indeed, the question is not even a close one.

Starting with the obvious, there is no question that Facebook, with its almost two billion daily active users,Footnote 26 possesses some degree of market power, as Justice Thomas argues in his Knight concurrence.Footnote 27 But its market share and profits have been stagnating or declining in recent years because of the rise, as Mark Zuckerberg, the CEO of Meta (the owner of Facebook and Instagram), acknowledges, of rival platforms such as TikTok.Footnote 28 As such, Facebook hardly constitutes the sort of unavoidable essential facility – such as a local landline telephone company (before the rise of cellular telephony) or monopoly railroad facilitiesFootnote 29 – that has traditionally been classified as a common carrier under the monopoly theory of common carriage (which in any event, as discussed earlier, is a weak one). And Twitter/X, whose daily active users have collapsed since Elon Musk bought the platform, from 229 million to 174 million as of February 2024,Footnote 30 is even less credibly described as a monopoly of that nature – as demonstrated by the fact that, when deplatformed by Twitter/X, President Trump created his own, competing platform, Truth Social. Yet it was undoubtedly Twitter/X's deplatforming of Donald Trump that triggered Justice Thomas's judicial attack and Florida's and Texas's legislative attacks on social media, given that Twitter/X was Trump's primary medium of communication to his followers (as well as being the subject matter of the litigation which generated Justice Thomas's call for common carriage regulation).Footnote 31

Indeed, the very existence of four or five very large social media platforms (Facebook, Twitter/X, Instagram, YouTube, and TikTok – five if one counts Facebook and Instagram separately despite their common ownership) in the United States aloneFootnote 32 belies the notion that any one of them is a monopoly essential facility. And finally, the fact that Trump continues to post on his new social media platform, Truth Social, also demonstrates beyond doubt that neither Twitter/X nor, for that matter, Facebook is the sort of non-bypassable network or service that has historically triggered common carrier treatment.

Aside from market power, the factors Justice Thomas identifies as relevant to common carrier status are whether the business “holds itself out as open to the public,” is “of public interest,” is in the transportation or communications sectors, or has received “special government favors.”Footnote 33 But Justice Thomas himself concedes that “of public interest” is a meaningless standard.Footnote 34 And as for the fact that social media platforms are in the communications sector, no one seriously believes that all communications companies are common carriers. After all, all media companies – including newspapers such as the New York Times and cable channels such as Fox News – are involved in “communications” but, everyone appears to agree, cannot be subjected to common carriage regulation. And as also noted earlier, the Supreme Court has specifically rejected common carrier status for television broadcasters and cable television operators, both undoubtedly in the “communications” business. In other words, being in the transportation or communications sectors is neither necessary (see inns) nor sufficient (see cable and broadcasting) to be classified as a common carrier.

That leaves "government favors" and "holding out." Let us begin with the latter because, as discussed earlier, it is the most plausible candidate for the traditional definition of common carriers. But again, obviously not all businesses that serve the public indiscriminately, such as Walmart and Denny's, are common carriers. Even within "communications" companies, being open to the public generally (as the Fox News website is) obviously cannot suffice. This is the insight underlying the DC Circuit's analysis in NARUC II, according to which the key to common carrier status is that customers of the communications service at issue transmit content of their own choosing to destinations of their own choosing. Without that indifference to content on the part of the communications service, common carriage is a nonstarter.

But now consider the absurdity of the argument that social media platforms are common carriers. Justice Thomas and the States of Florida and Texas object to social media platforms because they (allegedly) systematically "discriminate against" (i.e., refuse to carry) certain conservative content and refuse to serve certain conservative customers (in particular, President Trump). Furthermore, conservative voices object that social media firms choose to amplify certain content that platforms favor, while deemphasizing other, disfavored (i.e., conservative) content. In other words, the conservative argument is that social media platforms are or should be common carriers because they do precisely what a common carrier does not do: decide for themselves what content to carry, where to send it, and what to emphasize. In short, the Thomas/Florida/Texas argument is that social media platforms are common carriers because they are not common carriers. To quote the famous Supreme Court Justice Robert Jackson from a very different context, himself quoting Mark Twain, "The more you explain it, the more I don't understand it."Footnote 35

Finally, we should briefly consider the argument that platforms are common carriers because they have received “special government favors.” It is certainly true that traditional common carriers such as railroads and telephone companies were often granted special franchises or licenses, often with monopoly status, or special governmental powers such as eminent domain (the power to take private property without the owner’s consent)Footnote 36 – but obviously none of that has any relevance to social media platforms. So in what sense do such platforms receive special “favors”? Justice Thomas does not himself much elaborate on this argument, but an article he cites by Professor Adam Candeub of Michigan State University does. Professor Candeub argues that, historically, what appears to define common carriage “is a bargain that gives special liability breaks in return for the carrier refraining from using some market power to further some public good.”Footnote 37 And with respect to social media platforms, Candeub argues that the common carrier “bargain” can be found in Section 230 of the Communications Decency Act, a statute which limits platform liability for third-party content.Footnote 38

Section 230, its meaning, and its role in the social media wars are the topic of Chapter 6 of this book. Briefly, however, Section 230, which was enacted by Congress in 1996 (and has been called "the twenty-six words that created the internet"Footnote 39), has two crucial provisions. The first, Section 230(c)(1), provides that internet providers that host third-party content are not legally liable for harms caused by that content. And the second, Section 230(c)(2)(A), similarly provides that such platforms cannot be held liable for actions "taken in good faith" to restrict access to harmful content (i.e., for content moderation), even if the moderated content is constitutionally protected. Disputes over the actual meaning of these provisions, their effect, and their wisdom are myriad, and as I said will be taken up in Chapter 6. But for our purposes the question is, assuming that Section 230 grants platforms almost complete immunity for third-party content and for good-faith content moderation, would it then be reasonable for Congress to impose common carriage on platforms as a quid pro quo?

The answer is that it would not, because such a supposed "bargain" creates a fundamental and irreconcilable contradiction. The problem is this: Common carriage is a legal regime whereby platforms would be required to carry any and all legal content. Its very purpose is to eliminate content moderation. But the basic purpose of Section 230(c)(2) was and is to encourage content moderation, in order to prevent the internet and platforms from degenerating into sewage (on which more later in this chapter and in Chapter 6). In particular, Section 230 permits, and indeed encourages, platforms to block content that they, in good faith, believe is highly offensive, even if legal. But the whole point of common carriage regulation as proposed by Justice Thomas and Professor Candeub is to prevent platforms from selectively blocking legal content – precisely the conduct that Congress, by enacting Section 230(c)(2), intended to encourage and protect. In other words, this particular "bargain," Section 230 immunity in exchange for common carriage status, is not just implausible but incoherent.

In short, there is simply no plausible argument that social media platforms are or should be considered analogous to historical common carriers. They bear essentially no similarities to such carriers (other than engaging in “communications”), and certainly do not function as carriers of user-selected content, indifferent to content themselves, the thing that characterizes traditional common carriers such as telephone companies.Footnote 40

4.2 Social Media, Editorial Rights, and the NetChoice Cases

The alternative to the common carriage model for social media platforms would be to analogize the platforms to traditional media such as newspapers and cable television operators, and so protect their First Amendment editorial rights to control what content they carry, whom to present it to, and what parts of it to emphasize. The Supreme Court, to a substantial extent, endorsed this model in the NetChoice cases; but it also left open important questions. I will begin by summarizing what the Supreme Court actually said in NetChoice, and then take a step back to explore broader issues regarding the nature of editorial rights and their application to social media platforms. I will also suggest answers to some specific questions regarding how laws can restrict the editorial choices of platforms, which the NetChoice Court did not address.

The NetChoice litigation arose when two trade associations for tech firms (we can call them NetChoice collectively), whose members include Facebook and YouTube, challenged the constitutionality of the Florida and Texas statutes (S.B. 7072 and HB 20) regulating social media content moderation practices, which are described in Chapter 1. As briefly noted earlier, the Supreme Court did not fully resolve the constitutionality of either law, because all nine justices agreed that both lower courts had misapplied the procedural rules regarding so-called facial challenges to statutes, and so remanded the cases to those courts. Along the way, however, a majority of five or six justices (depending on the portion of the opinion) provided important guidance on how, on remand, the lower courts should apply the First Amendment to platform content moderation practices. And it is this part of the opinion that is our focus.

The crucial and fundamental legal issue underlying the NetChoice cases was whether the First Amendment granted any constitutional protection to content moderation decisions made by social media platforms. And on that basic question, the lower courts in this litigation took polar opposite positions. One, the United States Court of Appeals for the Eleventh Circuit, held that the First Amendment did protect platforms’ “editorial discretion,” and so invalidated the key provisions of the Florida statute it was reviewing.Footnote 41 The other, the United States Court of Appeals for the Fifth Circuit, concluded that platform content moderation practices had no expressive component at all, and so fell completely outside the First Amendment. As a result, the Fifth Circuit upheld the Texas statute in full.Footnote 42 When confronted with this disagreement, the Supreme Court sided firmly with the Eleventh Circuit, describing the Fifth Circuit’s reasoning as “rest[ing] on a serious misunderstanding of First Amendment precedent and principle,” and as being simply “wrong.”Footnote 43

The most important part of Justice Elena Kagan’s majority opinion addressing this issue was joined by six justices (everyone but Justices Thomas, Alito, and Gorsuch). The Court begins by analyzing the key Supreme Court precedents relevant to the issue of editorial rights: Miami Herald Publishing Co. v. Tornillo,Footnote 44 which held that the First Amendment protected newspapers’ “exercise of editorial control and judgment”; Pacific Gas & Elec. Co. v. Public Util. Comm’n of Cal.,Footnote 45 which held that regulators could not force a utility company to include materials it disagreed with in its billing envelopes; Turner Broadcasting System, Inc. v. FCC,Footnote 46 which held that cable television operators’ decisions regarding what channels to carry implicated their First Amendment right of “editorial discretion”; and Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston, Inc.,Footnote 47 which held that private organizers of a St. Patrick’s Day parade had a First Amendment right to exclude groups whose message they did not agree with. (The Court also distinguished two cases in which the Court had not found First Amendment violations, on the grounds that the regulated parties in those cases were not engaging in any expressive activity.)Footnote 48

Based on its analysis of these cases, the Court derived three critical principles. First, “the First Amendment offers protection when an entity engaging in expressive activity, including compiling and curating others’ speech, is directed to accommodate messages it would prefer to exclude … [a]nd that is as true when the content comes from third parties as when it does not.” Second, “none of that changes just because a compiler includes most items and excludes just a few.” And third, “the government cannot get its way just by asserting an interest in improving, or better balancing, the marketplace of ideas.”Footnote 49 Note that the last two principles largely resolve, and reject, the key arguments in favor of the Florida and Texas laws: that they did not meaningfully interfere with platform rights because platforms carry most third-party content without objection or change; and that regulation is necessary to cure platforms’ anti-conservative bias.

The rest of Justice Kagan's majority opinion, this time on behalf of five justices (so still a majority), considered how these principles applied to social media platforms (Justice Ketanji Brown Jackson, the Court's newest member at the time, thought it unnecessary to get into those detailsFootnote 50). In particular, the Court focused on their application to social media "feeds," including Facebook's News Feed and YouTube's homepage, because those are the issues that the lower courts had focused on. The Court began by describing in some detail how platforms moderate content on their feeds via algorithmic prioritization of chosen content, attaching warnings to some content, and completely blocking content deemed particularly harmful.Footnote 51 And the Court then unequivocally concluded that all of these activities are protected by the First Amendment because, like traditional media, social media platforms "create a distinctive expressive offering" via their content moderation practices.Footnote 52

Nor did the Court stop there. It went on to emphasize that it was irrelevant that platforms do not moderate the lion's share of content, and so are not tightly controlling the messages they convey. As with the parade organizers in Hurley, the fact that platforms were not seeking to express a "particularized message" did not mean they gave up "their right to reject the few messages they found harmful or offensive."Footnote 53 The Court also recognized that the right at issue in these cases was not vitiated by the fact that no one was likely to attribute specific user content to the platforms themselves, because the expressive choices being protected here are deciding what content to include, and how to display and organize it.Footnote 54 Finally, the Court (following the third "principle" described earlier) flatly rejected the idea that the government could override platforms' First Amendment rights based on a purported state interest in curing platforms' "silencing" of conservative viewpoints. It held, crucially, that the state had no legitimate interest in or power to create "greater balance in the marketplace of ideas" or to "chang[e] the balance of speech on the major platforms' feeds."Footnote 55

As noted earlier, for procedural reasons the Court did not ultimately resolve the constitutionality of either the Florida or Texas statutes. And in fact, it may well be that certain services provided by platforms, such as email and direct messaging, can legitimately be regulated, because those functions do resemble traditional common carriage. But given the Court's analysis, it is crystal clear that the First Amendment fully protects the core content moderation functions platforms use to shape their user feeds, and that Florida's and Texas's attempts to regulate those functions are flatly unconstitutional. As such, when a case arises that does clearly raise this question, quite possibly in the form of as-applied challenges by Facebook and YouTube to the Florida and Texas laws, those laws will certainly be invalidated. Furthermore, as much as Justices Thomas and Alito sought to avoid this conclusion by labeling the majority's analysis "dictum"Footnote 56 or "superfluous,"Footnote 57 all sensible people understand that the writing is on the wall for the core applications of the Texas and Florida statutes.

Finally, it should be emphasized that the Supreme Court in NetChoice was not only clear in extending editorial rights to social media platforms but absolutely correct to do so, for many of the same reasons that those platforms are not common carriers. Most fundamentally, the reason to grant social media platforms editorial rights is that, unlike common carriers such as telephone companies and unlike Internet Service Providers (ISPs) such as Comcast, social media platforms are intentionally designed to provide a specific experience to users. While it is true that most of the content available on social media platforms is generated by third parties rather than the platforms themselves, social media is not a transparent conduit for speech in the way a telephone system or an ISP is. To the contrary, platforms famously moderate content extensively, making constant, value-based choices about what third-party content to permit on their platforms.Footnote 58 And they also ubiquitously employ algorithms that determine what content to show users, what content to emphasize, and what content to deemphasize. Furthermore, Facebook and other platform owners are constantly tweaking and making deliberate choices about how their algorithms should operate, both for business reasons and for ideological ones (sometimes in response to public pressure). Indeed, the prominent commentator Tarleton Gillespie has convincingly argued that content moderation "is, in many ways, the commodity that platforms offer."Footnote 59 For platforms, editorial discretion is thus, as with newspapers, a fundamental feature of their operations.

Not only are platforms factually more like traditional media than common carriers; basic free speech theory also supports granting social media platforms editorial rights. The reason we grant editorial rights to other media, such as newspapers and websites that provide their own content, is that we think public discourse is enhanced when publishers are able to present coherent, consistent products with consistent messages. Fox News is not CNN, and the Wall Street Journal editorial pages are not the same as those of the New York Times. Furthermore, we believe that this diversity of perspectives advances public debate despite some risk of ideological sorting (conservatives watching Fox News and reading the Wall Street Journal, liberals doing the same with CNN and the New York Times). Permitting the creation of such coherent and consistent messaging is the very purpose of First Amendment editorial rights because while debate across perspectives is of course a valuable part of public discourse and democracy, so too is discussion within ideological groups, which permits them to develop (and sometimes share with the public) their own values and views.Footnote 60

Indeed, it would seem fundamental to the very concept of democratic citizenship that we must permit individuals to choose what information and perspectives to focus on. Or conversely, it is entirely inconsistent with our system of popular sovereignty and democratic self-governance to permit the State to choose what information is “appropriate for” or “beneficial to” citizens, and then force it upon them. We do not, after all, require liberals to watch Fox News, or conservatives to watch CNN, and could never do so consistent with the First Amendment. Yet imposing viewpoint neutrality, nondiscrimination, or common carrier requirements on social media platforms does precisely the same thing. It denies platforms the ability to create ideologically coherent packages of content, and so denies platform users the ability to select among such packages. Such regulation is at heart no different than legally requiring Fox News to provide airtime to Rachel Maddow or requiring CNN to provide time to Laura Ingraham – laws which presumably all agree, for good reason, would violate the First Amendment editorial rights of those news channels. To deny social media platforms, unquestionably the new dominant media for political and social discourse, the same freedom makes little sense.

If anything, the fact that modern social media platforms rely on third-party rather than their own content strengthens rather than weakens the argument in favor of editorial autonomy. The starting, but widely shared, assumption here is that democratic self-governance relies on public discourse;Footnote 61 and further, that this discourse is enhanced when it is truly public, meaning open to participation by the public at large. While historically a partisan press (discussed further in the next chapter) permitted those few who had access to the press (i.e., political and social leaders) to create and shape groupings of citizens with shared values and perceived interests, social media permits citizens themselves to engage in discourse, both with leaders and among themselves, and so to participate in that creative, shaping process. Thus, the internet has democratized not just speech but also association and assembly.Footnote 62 Admittedly, granting platforms editorial rights leaves citizen groups at the mercy of the platform owners’ decisions to permit or deplatform such groups, including ideologically driven decisions;Footnote 63 but to deny platforms such rights would leave such groups at the mercy of government regulation that would inevitably also favor some groups over others, surely a worse outcome. And in any event platforms, unlike the government, do not monopolize power and so if a group is denied access to one platform (say Facebook), it can always migrate to another (say Parler, Truth Social, or Telegram). A group disfavored by the government would have no such exit option.

Finally, the argument some make, that because the major social media platforms today claim not to engage in ideologically based moderation they have no need for editorial rights, is wrong for three different reasons. First, it is irrelevant. Even if platforms such as Facebook have not engaged in ideologically based moderation, they still use their algorithms to control users’ experiences on their platform, making those experiences more engaging (and arguably more addictive, which is the source of much criticism of Facebook and Twitter/X). It is worth remembering in this context that the First Amendment protects entertainment as well as political and ideological speech, at least in part because of our inability to distinguish between the two.Footnote 64

Second, it is untrue. Social media platforms’ terms of service and other moderation rules are replete with ideological choices. The decisions by Facebook to ban hate speech, glorification of violence, electoral falsehoods, and even nudity are in fact ideological choices. To consider just nudity, the enormous struggles Facebook faced early in its existence over defining nudity and determining how to apply its prohibition to breastfeeding womenFootnote 65 or the famous Napalm Girl photographFootnote 66 illustrate the charged ideological questions that can arise in enforcing even seemingly simple rules. Moreover, social media firms’ willingness to engage in arguably ideological content moderation is evolving. Twitter/X started life as an “anything goes” platform,Footnote 67 but then rapidly moved to exercise extensive control over content,Footnote 68 before again relaxing those controls after Elon Musk’s purchase and rebranding (to “X”) of the platform.Footnote 69

Finally, it is a logical error to condition constitutional rights on their exercise. By that reasoning, only current gun owners would have Second Amendment rights – but that obviously cannot be the law. Similarly, a printer’s Ben Franklin-like commitment to generally publish all perspectivesFootnote 70 cannot mean that the printer has waived their right to reject content that is particularly objectionable in their (evolving) view. For the same reason, even if social media platforms today do not engage in ideological censorship,Footnote 71 that is no reason to believe that they have waived that right, given the extensive other moderation that they undoubtedly do engage in.

4.3 NetChoice and Legislative Imposition of Common Carriage

As noted earlier, the NetChoice majority never explicitly addressed the question of whether social media platforms may be regulated as common carriers. Furthermore, Justices Thomas and Alito both strongly suggested in their separate opinions that the issue remains open.Footnote 72 This, however, seems quite wrong. Despite never using the term “common carrier,” the majority’s analysis also clearly, albeit implicitly, rejects the argument that at least with respect to their “feed” functions, either Congress or state governments may impose common carrier obligations on social media platforms (though perhaps they can do so with respect to other, more common-carrier-like platform functions such as email and direct messaging). The reason for this is simple: The Supreme Court’s caselaw clearly establishes that legislatures cannot strip entities of First Amendment rights by fiat, simply by labeling them as “common carriers” or the related concept, “places of public accommodation.”

That, in fact, is precisely what the State of Massachusetts attempted to do in the Hurley case cited by the NetChoice majority. In Hurley, a group of gay, lesbian, and bisexual individuals of Irish descent formed an organization named GLIB, which sought to participate in Boston’s annual St. Patrick’s Day parade in a way that would express their pride in their openly gay, lesbian, and bisexual identities as well as in their Irish heritage. After the organizers of the parade (a private group) denied their application, GLIB filed a lawsuit claiming that the denial violated a state law forbidding discrimination on account of sexual orientation by places of public accommodation.Footnote 73 Massachusetts state courts concluded that the parade constituted a place of public accommodation, that GLIB’s exclusion violated the antidiscrimination statute, and that application of the statute did not violate the parade organizers’ First Amendment rights. But the Supreme Court reversed the state court, holding that regardless of the parade’s designation under state law, the First Amendment prohibited the government from interfering with the parade organizers’ editorial choices regarding what third-party messages to include in their expressive activity.

Another case supporting the conclusion that applying the label “common carrier” or “place of public accommodation” does not eliminate First Amendment rights is Boy Scouts of America v. Dale.Footnote 74 The case involved a decision by a New Jersey Boy Scouts troop to revoke the adult membership of an assistant scoutmaster, James Dale, after discovering that Dale was gay. After Dale sued, the New Jersey Supreme Court held that the Boy Scouts were a place of public accommodation under state law, and that therefore the Scouts’ actions violated the state’s ban on discrimination on the basis of sexual orientation. The US Supreme Court held, however, citing Hurley, that this application of state public accommodation law violated the Boy Scouts’ First Amendment rights.Footnote 75 Like Hurley, the Dale decision thus clearly stands for the proposition that legislatures, and courts, cannot strip entities of First Amendment protections, including the right to exclude content or speakers they do not wish to associate with, simply by designating those entities as places of public accommodation. Furthermore, the Dale Court held that such legislative action is particularly suspect when a state extends the “places of public accommodation” designation well beyond entities such as “inns and trains” which were traditionally considered in that category.Footnote 76

The lesson from Hurley and Dale is clear: States (or Congress) cannot strip expressive entities or platforms of First Amendment rights simply by designating them as "common carriers" or "places of public accommodation." Furthermore, this is especially true when the government attaches those labels to things that do not closely resemble the kinds of entities historically recognized as falling within those labels. But that is precisely what the states of Florida and Texas sought to do in S.B. 7072 and HB 20. What this discussion demonstrates is that, Justices Thomas and Alito notwithstanding, legislative attempts to strip platforms of their core First Amendment editorial rights simply by labeling them as common carriers are clearly unconstitutional.

4.4 A Deeper Dive into Editorial Rights

The Supreme Court’s NetChoice decision is, for all these reasons, best read to recognize that social media platforms, at least in their core content moderation and presentation functions, enjoy First Amendment editorial rights, and may not, with respect to those same functions, be regulated as common carriers. What the Court did not do, however, was to explicate in any detail the nature of those editorial rights, or their limits. In this section we will explore how courts should resolve those questions when eventually they arise.

To understand the scope of editorial rights, we should first consider the source and nature of those rights. Historically, the core protection provided by the Speech and Press Clauses of the First Amendment was the right to express one's own ideas, and to distribute them as widely as one chooses, free of governmental interference. In addition, since the 1943 flag salute case,Footnote 77 there has been a related right against the government compelling you to express an ideological message of the government's choosing. Finally, as the NetChoice majority emphasized, the Court has also recognized that owners of expressive platforms that communicate their own speech or the speech of others have a right to choose what to include and what not to include on their platforms.

These editorial rights are somewhat related to both the speech and compelled speech rights, but they are distinct, especially with respect to third-party content. Editorial rights are not a form of pure speech. When a platform carries third-party content, interference with editorial freedom does not involve suppression of the regulated platform’s own speech. Nor are editorial rights simply an aspect of compelled speech, for two separate reasons. First, one type of editorial right – the right to carry third-party speech that the government disapproves of – has nothing to do with compelled speech. Second, even when the claimed editorial right is to refuse to carry government-favored speech, pure compelled speech doctrine is a poor fit because, as NetChoice held, editorial rights apply even when it is highly unlikely that the speech at issue would be attributed to the regulated entity/platform owner.

For all of these reasons, editorial rights are best understood as a third, distinct right of free expression protected by either the Free Speech or (more plausibly) Free Press Clauses of the First Amendment. But what exactly are those rights? To begin with, a distinction must be drawn between positive and negative editorial rights – that is, between a right to include on one’s platform expression that the government disfavors, and a right to exclude information that the government would mandate. This distinction has obvious parallels to the distinction between the basic free speech right and the right against compelled speech; but, as noted earlier, the parallel is not exact.

Nonetheless, it may well be that positive editorial rights should receive stronger constitutional protections than negative editorial rights, just as the right to speak is more robust than the right not to speak.Footnote 78 This is because the expressive injury, and potential distortion of public discourse, caused by state restrictions on what content platforms are permitted to include are obvious and severe. Silencing speech they dislike is the quintessential way in which governments control and manipulate public discourse, to the severe detriment of democracy.

It is less obvious, however, that the distortion caused by forced inclusion of unwanted content is so severe – so long as, and this is crucial, the platform owner is permitted to prominently disassociate itself from the required content, and indicate that the content is government-mandated. Without such a right to disassociate, government mandates can seriously distort public discourse, because listeners/users will mistakenly attribute to platforms and other users views that are in fact the government’s, thereby giving them credibility they do not deserve. But if it is clear that mandated speech does originate with the government, then the public can judge it appropriately.

Moving on from simple government censorship and mandates, the NetChoice majority also clearly recognized that editorial rights protect decisions about how to present content and (relatedly) what elements of that content to emphasize. With respect to the traditional media, this editorial right encompasses the decision to highlight some content on a newspaper front page or magazine cover, while burying other content inside the paper or magazine. With broadcast and cable television channels, this editorial power is most obviously exercised when some programming is allocated "primetime" slots, while other programming is relegated to 2 a.m. With cable television operators, the decision on which channels to grant preferred (i.e., low) channel numbers is similarly an editorial one. And with social media platforms such as Facebook and YouTube, the decision on what content to highlight in users' feeds, and what content to deemphasize, is likewise an editorial one.

Finally, it is important to recognize that even when editorial rights exist, how the government interferes with those rights may well be constitutionally relevant, because no rights, including First Amendment rights, are absolute. Regarding disfavored (but legal) content, for example, presumably a prohibition on carrying the content constitutes a greater First Amendment burden than, say, a requirement that the content be accompanied by a warning label. After all, labeling is something that platforms today do voluntarily all the time. Admittedly, government-mandated labeling is different, but so long as the label is clearly attributed to the government, the First Amendment burden (while real) seems less severe, suggesting that mandatory labeling may be permissible if (for reasons discussed later) the government has a strong, objectively reasonable (i.e., non-ideological) reason for requiring it.

Similarly, if the government were to mandate that a platform carry particular content, for reasons already noted the harm of such a mandate would be mitigated (though not eliminated) so long as the platform can clearly state that the content is state-mandated, and disown it. Indeed, absent the ability to do so the violation of editorial rights merges with a compelled speech violation of the most egregious form and (by banning the platform disclaimer) a direct violation of the right to speak. So, even if some requirements to carry content might be constitutionally permissible (on which more later), that would be so only if platforms had the right to identify the content as government-mandated and to make clear that the platform does not endorse it.

4.5 Implications

In light of all of the earlier discussion, let us consider specifically what editorial rights platforms should enjoy, and which should be more (or less) robust than others. It seems clear in this regard that the strongest editorial right a platform must possess is the positive right to include any legal content it desires on its platform. For the state to interfere with this right not only directly interferes with the platform’s editorial control but also directly infringes on the free speech rights of the individual who posted the content. Since such regulations necessarily specify what content is forbidden, such regulatory intervention is presumptively unconstitutional (i.e., subject to strict scrutiny) under standard First Amendment doctrine, if challenged by the speaker. Even if the speaker does not assert their right to speak, however, a platform should similarly be able to assert its editorial rights in seeking to invalidate any such regulation. As we shall see, however, this simple fact dooms many regulatory proposals (primarily from the political left) directed at social media platforms.

On the other hand, while regulatory interference with negative editorial rights, by requiring inclusion of specified content, certainly remains constitutionally troubling, it might be defensible in specific circumstances. The problem with such inclusion requirements is twofold. First and foremost, as discussed earlier, such inclusion undermines a platform's ability to create a coherent user experience; and concomitantly, it interferes with the ability of groups of users to develop shared beliefs and values, by interposing the state's own preferred beliefs into the conversation. As such, forcing content onto platforms interferes with both editorial and associational values. Second, requiring inclusion of content has the potential to distort public discourse, by overemphasizing the preferred positions of the state at the expense of the views of the public as expressed in posts by users, a clear violation of the democratic principles that underlie the First Amendment. As James Madison put it, in a "Republican Government … the censorial power is in the people over the Government, and not in the Government over the people."Footnote 79

For all of these reasons, there should generally exist a presumption against state-imposed inclusion of content onto platforms – and that indeed is what the NetChoice Court held. But that presumption need not be absolute because, as noted earlier, forced inclusion of content clearly has a less severe impact on both editorial integrity and public discourse than suppression of content. Furthermore, inclusion of government-mandated content on a platform constitutes less of an interference with First Amendment interests than it would with, say, a newspaper, both because platforms are already primarily dedicated to hosting content generated and selected by third parties while exercising modest control; and because, unlike newspapers, the major online platforms do not have capacity constraints, so including government-mandated content does not require removing other content (though if a smaller platform did have capacity constraints, then government-mandated content might pose a more serious First Amendment burden).

Nonetheless, it seems clear that regulations that require platforms to carry content expressing the government's own ideological preferences, or private ideologies that the government supports (as Florida and Texas sought to do), are out of bounds. Such rules create the greatest distortions of public discourse and seem to have no strong justification. The government, after all, remains free to circulate its preferred message using its own means of communication, rather than hijacking privately owned ones. For this reason, a law that, for example, required a social media company to display messages discouraging smoking/drug use/premarital sex or encouraging voting/gun ownership/exercise would be clearly unconstitutional.

On the other hand, requirements to carry non-ideological, factual content, even though it is chosen by the government, seem less problematic. Thus regulations that require platforms to prominently disclose their own content moderation practices, for example, are surely not terribly troubling so long as they do not impose a serious burden on platforms' ability to engage in content moderation (whether they do is a disputed pointFootnote 80). And one could imagine a myriad of situations where governments may legitimately require the display of factual content, such as displaying the hours and locations of polling places shortly before an election, or displaying the locations of shelters during a natural disaster. Surely these kinds of mandates advance strong state interests while imposing little or no harm to editorial rights or public discourse, so long as the quantity of mandated content remains modest (modest because if mandates become onerous, they could crowd out platform- and user-favored contentFootnote 81). And again, given that platforms are in the business of displaying third-party content with few restrictions, requiring some additional, unobjectionable content seems a minor burden on their editorial rights.

To this point, we have considered interferences with platform editorial control that either prohibit, or require, specified content. The state, however, has a larger regulatory repertoire than that. Consider a hypothetical legal requirement that platforms label specific content as false, or a requirement that platforms post warning labels or links to trusted sources of factual information (as many already do voluntarily) when specific topics such as COVID-19 vaccines are the subject matter of a post. Notice that such requirements implicate both positive and negative editorial rights. They implicate positive rights because a platform’s decision to display specific user content triggers legal consequences. They implicate negative rights because the legal remedy is to force platforms to post content of the government’s choosing.

Even given that, however, it seems plain that labeling requirements are less intrusive on editorial discretion than flat bans, because platforms remain free to post any material they wish, to control the prominence of those materials, and to disassociate themselves from any government-mandated label by captioning the label as imposed by the government. On the other hand, there is an obvious concern that regulatory authorities will select the content to be targeted for labeling for ideological reasons, which would violate the cardinal rule against ideologically based infringements of negative editorial rights. As a consequence, at a minimum courts should approach labeling or linking requirements with a high degree of skepticism, and uphold them only if the government can prove that it is addressing a serious and urgent social problem, that the information triggering the requirement is a demonstrably false statement of fact, and that the information contained in the mandated label or link is demonstrably factual and true.

4.6 Regulatory Proposals

Recognizing that platforms should and do possess robust First Amendment editorial rights, and specifying the nature of those rights, provides valuable tools to evaluate the sorts of regulatory proposals discussed in Chapters 1 and 2. Most fundamentally, for reasons already discussed, editorial rights are clearly inconsistent with some of the strongest regulatory proposals coming from conservative critics of social media, such as regulating platforms as common carriers, as Justice Thomas proposed, or requiring "fairness" or viewpoint neutrality in content moderation, as Texas sought to do.

But just as editorial rights are a formidable barrier to the mainlyFootnote 82 conservative proposals to require viewpoint neutrality on social media platforms, such rights also appear to doom most progressive proposals of the sort discussed in Chapter 2 to regulate social media. Examples of such proposals include pressure by Democratic State Attorneys General (backed by the threat of legal action) to force platforms to block more hate speech,Footnote 83 as well as legislative proposals and actions by Senator Amy Klobuchar and the State of California, discussed in Chapter 2, which target medical (especially COVID) mis- and disinformation.

The reason such proposals violate the First Amendment is, quite simply, that both hate speechFootnote 84 and falsehoodsFootnote 85 are fully protected under the First Amendment. And the First Amendment's prohibition on the government suppressing protected speech based on its message applies equally to government requirements that private actors suppress such speech.Footnote 86 As such, users whose posts such laws target would surely be able to attack the laws successfully as violating the First Amendment. But what the holding in NetChoice establishes is that even if the users themselves fail to advance such claims, platforms should be able to attack such legislative efforts as usurping their core editorial right to carry whatever legal and constitutionally protected content they choose.

However, while a flat-out prohibition on falsehoods or hate speech cannot survive constitutional scrutiny, the First Amendment does permit regulation even of protected speech, thereby restricting both speech and editorial rights, so long as the regulation serves urgent social goals and is written narrowly. Thus, a narrow prohibition on falsehoods regarding, for example, voting rules might survive judicial scrutiny if written carefully to target only clearly false and clearly harmful assertions. Similarly, for reasons already discussed, narrowly written labeling requirements (or requirements to link to truthful information) might also be permissible in such situations, so long as they are carefully targeted at content that is provably harmful and false.

Furthermore, when speech is unprotected there is no question that legislation can ban it, overriding both speech and editorial rights. Thus, there is no First Amendment barrier to laws requiring platforms, upon being given notice, to remove false commercial speechFootnote 87 or hate speech that crosses the line into incitement of violence under the Supreme Court's precedent in this area.Footnote 88 Nor would there be any constitutional barrier to amending Section 230 to limit or eliminate platforms' statutory immunity for carrying third-party content that is constitutionally unprotected, as Congress in fact did in 2018 with respect to platforms that knowingly permit their services to be used to facilitate sex trafficking.Footnote 89 Questions regarding possible Section 230 reforms, and their (troubling) practical consequences, will be taken up in more detail in Chapter 6; for now, it is sufficient to note that the First Amendment is not an absolute bar to such legislative actions, so long as platform liability is limited to content the platform knows is illegal.

Finally, consider laws that forbid platforms from deplatforming a specific class of users – as Florida did with politicians and journalists,Footnote 90 in obvious response to the deplatforming of President Trump in January of 2021. While at first blush such laws seem less troubling than direct restrictions on content moderation, they are nonetheless very problematic. For one thing, it is highly predictable that a legislature imposing such a limit will almost always be seeking to protect speakers with specific ideological bents (as was surely true in Florida), which makes the law indistinguishable from one directly favoring specific viewpoints. In addition, such legislation has the direct and obvious effect of denying platforms one powerful remedy – temporary or permanent deplatforming – against users who regularly violate content policies. This in itself sharply interferes with editorial freedom by making it difficult for platforms to control and deter scofflaw users. As such, courts should approach such laws with, at a minimum, high levels of skepticism.

4.7 Why Treating Platforms as Common Carriers is a Terrible Idea

We should end our discussion of the choice between treating social media platforms as common carriers, similar to railroads and telephone companies, or as media entities possessing editorial rights, by taking a step away from the law and considering instead policy and practical consequences. To understand why the common carrier model for platforms is not only unconstitutional but also terrible public policy, it is useful to envision what the world would look like if platforms were treated as common carriers. Would that world be a better one than the admittedly imperfect status quo? Proponents of regulation appear to believe so (or so one must assume), but they are clearly wrong.

Let us begin with Justice Thomas's far-reaching proposal to fully regulate social media platforms as common carriers or places of public accommodation, on par with railroads, landline telephony, and telegraphs. At the core of such regulation is a requirement of nondiscrimination – an obligation to serve all customers without distinction and on identical terms, so long as the provider has the capacity to do so (I presume that Justice Thomas did not intend to endorse other elements of common carriage status, such as price regulation).Footnote 91 As applied to social media, this would mean that platforms would be required to carry any and all (legal?) content posted by any person who is or seeks to be a platform user (capacity constraints not being an issue for the major platforms). What would this look like?

First, let us consider the potential caveat limiting platform hosting obligations to legal content. While they rarely address the question directly, proponents of platform regulation appear to implicitly assume that even under common carrier regulation, platforms could and would refuse to host blatantly illegal content such as child pornography or violent threats. But it is not clear why that is so. After all, when terrorists use telephone calls to plan an atrocity, or insurrectionists travel by airplane or railroad to attack the Capitol, no one holds the telephone company, airline, or railroad responsible for the resulting violence, even if they had reason to know that illegal activity was afoot. The reason is that imposing on common carriers an obligation to police their customers seems completely inconsistent with their broader obligation to serve.

Why then should platforms be different? If platforms are regulated as common carriers, they will presumably dismantle the elaborate content-moderation machinery that they have created.Footnote 92 After all, content moderation is a fraught, extremely expensive, and controversial process, so if platforms’ ability to engage in such moderation is severely restricted, they will surely not bother incurring the expense. But once the content moderation machinery is dismantled, how and why would platforms suppress illegal content? Left to their own devices, one strongly suspects that they would not.

A possible response to this argument is that platforms should simply be subject to a legal obligation to block illegal content, while carrying all legal content. But this too is highly problematic. The difficulty arises because, as Eric Goldman and Jess Miers have pointed out, the line between protected and unprotected content is often very blurry.Footnote 93 When a communication crosses the line from hyperbole to a "true threat," for example, is often unclear.Footnote 94 And even when the legal line is clear, it is often quite difficult to determine whether particular content is illegal – for example, whether it is unprotected child pornography portraying a minor engaged in sexual conduct,Footnote 95 or protected "virtual child pornography" depicting a young-looking adult.Footnote 96 But under a regulatory approach combining common carriage with an obligation to block illegal content, platforms would be liable both if they failed to block illegal content and if they mistakenly blocked legal content in the belief that it was illegal, thereby violating their common carrier obligations. Such a legal regime would be both profoundly unfair and entirely unsustainable.

Leaving aside the problem of illegal content, however, even with respect to unquestionably legal and constitutionally protected content, common carriage would have highly problematic consequences. As Goldman and Miers also point out, the world is full of content that is “lawful-but-awful,” and experience suggests that the internet is particularly likely to be used to spread such content (perhaps because of the pseudo-anonymity of being online,Footnote 97 and also because of the lack of online gatekeepers, the topic of the next chapter). Such lawful-but-awful content includes non-obscene pornography, gruesome depictions of violence (sometimes posted by the perpetrator), hate speech, bullying that does not rise to the level of harassment or threats, and of course lies galore about just about anything, including dangerous lies such as medical misinformation. Such content is legal and constitutionally protected, so a common carriage requirement would entirely eliminate social media platforms’ power to block such content. Indeed, platforms could not even de-amplify it, because common carriers are required to provide service to all users on equal and nondiscriminatory terms, on a first-come, first-served basis. In the world of social media, this means hosting and displaying all legal content without making distinctions, because for the major platforms capacity constraints are a non-issue. That seems a rather troubling outcome.

Furthermore, if platforms were forced to host and display lawful-but-awful content on equal terms with all other content, it seems highly likely that all but the worst users and advertisers would ultimately flee the platforms. Users would flee because most people quite reasonably do not want to waste their time (and harm their emotional well-being) wading through pornography, violence, hate, and lies. And advertisers would flee because they do not want their products associated with such things – what the advertising industry calls protecting "brand safety." This dynamic was on full display in the wake of Elon Musk's purchase of Twitter/X, whose advertising revenues fell 59 percent in the ensuing six months, at least in part because of the rise of hate speech and pornography on the platform.Footnote 98

In the long term, such a downward spiral would lead to the platforms' demise. That seems a very bad result, not only for the platforms themselves but for the billions of users worldwide who enjoy interacting with social media. Moreover, whether or not social media is on balance socially beneficial, for the government effectively to destroy a new form of communicative media through regulation would, constitutionality aside, set a truly terrible precedent.

Perhaps because they recognized these problems, the Florida and Texas laws at issue in NetChoice, while giving a nod to the notion that social media platforms operate as "common carriers," both stopped well short of true common carriage requirements. Nonetheless, both laws would, if ever implemented, have highly troubling consequences. The problem with Florida's law is, frankly, that it is bizarre. The special protections it provides to speech by or about politicians suggest that, in the view of the Florida legislature, elected officials are more important contributors to public discourse than the citizens who vote them into office. How such an approach can be reconciled with the basic premises of popular sovereignty that underlie our system of government is beyond understanding. Florida's law favors elected officials over ordinary citizens in the process of forming public opinion, both by protecting posts about public policy only when politicians post them (recall that while the Florida law protects private posts about politicians, it does not protect posts addressing public policy unless they are posted by politicians) and by protecting only politicians from deplatforming. But if James Madison was correct in asserting that "[p]ublic opinion sets bounds to every government, and is the real sovereign in every free one,"Footnote 99 then Florida has it upside-down.

Finally, let us consider Texas's requirement of viewpoint neutrality in content moderation. On its face, this seems a narrower and more reasonable restriction than full common carriage or Florida's self-serving pro-politician gerrymander, since it would presumably still permit platforms to block some forms of lawful-but-awful content, such as nudity or personal abuse, on a viewpoint-neutral basis. But viewpoint neutrality nonetheless prohibits a great deal of desirable content moderation. For example, outright pro-Nazi or white supremacist speech is fully shielded by the Texas law as a protected viewpoint. The same is true of speech encouraging gender-based violence or self-harm. And the same is true of speech praising and supporting ISIS and encouraging emulation of terrorist violence.Footnote 100 It is ironic in this regard that Twitter/X, which in its early years sought to avoid content moderation, changed its approach precisely because it had become an important venue for ISIS propaganda and recruitment.Footnote 101 In the name of protecting conservative viewpoints, Texas would force Twitter/X (and Facebook and YouTube and all other platforms) back to that time.

For similar reasons, platform efforts to block hate speech directed at racial or sexual minorities or at women would also be illegal under the Texas statute. The Supreme Court has clearly held that hate speech is a protected viewpoint.Footnote 102 As a result, a hate-speech ban on social media would directly violate HB 20's core requirements of viewpoint neutrality.Footnote 103 To give just one example of the consequences, under HB 20 Facebook would be required to reverse its October 2020 decision to ban Holocaust denial.Footnote 104 Indeed, because HB 20 prohibits censorship based on the viewpoint of the user as well as of the content, it would also appear to prohibit platforms from banning white supremacist groups such as the Ku Klux Klan. Not only are these outcomes highly problematic from a public policy perspective, but in the long run they would also, as noted earlier, threaten the very existence of social media platforms as users and advertisers flee such a toxic environment.

In short, there are very good reasons, both ethical and business-related, why almost all successful social media platforms moderate content, often extensively. Eliminating that ability, as Justice Thomas's common carrier proposal would do, would have utterly unacceptable social consequences. And even Texas's more modest requirement of viewpoint-neutral content moderation would end up enabling a great deal of speech, such as terrorist propaganda and white supremacist speech, to which most reasonable people do not want to be exposed. Which is to say that these proposals are not just unconstitutional; they are a terrible idea.

Footnotes

1 144 S. Ct. 2383 (2024).

2 Ibid. at 2399–2403.

3 Ibid. at 2403–06.

4 Ibid. at 2407–08.

5 Ibid. at 2430–33 (Alito, J., concurring in the judgment).

6 The issue concerned the fact that NetChoice and the other plaintiffs in these cases chose to bring a “facial” rather than an “as-applied” challenge to the Florida and Texas laws, and had to do with how to analyze such “facial” challenges, a topic thankfully far outside the scope of this book.

7 NetChoice, 144 S. Ct. at 2438–39 (Alito, J., concurring in the judgment).

8 Biden v. Knight First Amend. Inst. at Columbia Univ., 141 S. Ct. 1220, 1222–23 (2021) (Thomas, J., concurring) (citing Adam Candeub, Bargaining for Free Speech: Common Carriage, Network Neutrality, and Section 230, 22 Yale J.L. & Tech. 391, 398–403 (2020)); James B. Speta, A Common Carrier Approach to Internet Interconnection, 54 Fed. Comm. L.J. 225, 255 (2002).

9 Speta, supra n. 8, at 253–54 (citing Bruce Wyman, The Law of Public Callings as a Solution of the Trust Problem, 17 Harv. L. Rev. 156 (1904)).

10 Angela J. Campbell, Publish or Carriage: Approaches to Analyzing the First Amendment Rights of Telephone Companies, 70 N.C. L. Rev. 1071, 1120 (1992).

11 Ibid. at 1121–22.

12 FCC v. Midwest Video Corp., 440 U.S. 689 (1979).

13 Columbia Broad. System, Inc. v. Democratic Nat’l Comm., 412 U.S. 94 (1973).

14 47 U.S.C. § 153(h).

15 Biden v. Knight First Amend. Inst. at Columbia Univ., 141 S. Ct. 1220, 1222–23 (2021) (Thomas, J., concurring).

16 Ibid. at 1224–25.

17 Christopher Yoo, The First Amendment, Common Carriers, and Public Accommodations: Net Neutrality, Digital Platforms, and Privacy, 1 J. Free Speech L. 463, 466–68 (2021).

18 Ibid. at 468 (citing Nebbia v. New York, 291 U.S. 502, 536 (1934)).

19 Ibid. at 469–72.

20 FCC v. Midwest Video Corp., 440 U.S. 689 (1979).

21 Yoo, supra n. 17, at 473–75.

22 National Ass’n of Regul. Util. Comm’rs v. FCC (NARUC I), 525 F.2d 630, 641 (D.C. Cir. 1976).

23 Ibid. (quoting Semon v. Royal Indemnity Co., 279 F.2d 737, 739 (5th Cir. 1960)).

24 National Ass’n of Regul. Util. Comm’rs v. FCC (NARUC II), 533 F.2d 601, 609 (D.C. Cir. 1976).

25 See Yoo, supra n. 17, at 475.

26 Shannon Bond, Facebook Shrugs Off Fears It’s Losing Users, NPR (Apr. 28, 2022), www.npr.org/2022/04/28/1095147942/facebook-shrugs-off-fears-its-losing-users.

27 Biden v. Knight First Amend. Inst. at Columbia Univ., 141 S. Ct. 1220, 1224 (2021) (Thomas, J., concurring).

28 Bond, supra n. 26.

29 See United States v. Terminal R.R. Ass’n of St. Louis, 224 U.S. 383 (1912).

30 Twitter Daily User Growth Rises as Musk Readies to Take Control, Al Jazeera (Apr. 28, 2022), www.aljazeera.com/economy/2022/4/28/twitter-daily-user-growth-rises-as-musk-readies-to-take-control; David Ingram, Fewer People Are Using Elon Musk’s X as the Platform Struggles to Attract and Keep Users, According to Analysts, NBC News (Mar. 22, 2024), www.nbcnews.com/tech/tech-news/fewer-people-using-elon-musks-x-struggles-keep-users-rcna144115.

31 Knight, 141 S. Ct. at 1221 (Thomas, J., concurring).

32 If one considers social media at a global scale, one must add to that list platforms such as Telegram, which skirt the line between social media and messaging but enjoy huge user bases (in Telegram’s case, larger than Twitter/X’s).

33 Knight, 141 S. Ct. at 1222–23 (Thomas, J., concurring).

34 Ibid. at 1223 (Thomas, J., concurring).

35 Securities & Exchange Comm’n v. Chenery Corp., 332 U.S. 194, 214 (1947) (Jackson, J., dissenting).

36 Adam Candeub, Bargaining for Free Speech: Common Carriage, Network Neutrality, and Section 230, 22 Yale J.L. & Tech. 391, 402–03 (2020).

37 Ibid. at 405–06.

38 Ibid. at 418–22.

39 Jeff Kosseff, The Twenty-Six Words That Created the Internet (2019).

40 In an article published just as this book was being completed, Professors Ganesh Sitaraman and Morgan Ricks of Vanderbilt University argue that internet platforms do qualify as common carriers under the common law. Ganesh Sitaraman and Morgan Ricks, Tech Platforms and the Common Law of Carriers, 73 Duke L.J. 1037 (2024). A closer look at their argument (which is not particularly focused on social media) demonstrates, however, that the form of common carriage they support would permit many of the platform behaviors that Justice Thomas, Florida, and Texas seek to prevent. Ibid. at 1088–98.

41 NetChoice, 144 S. Ct. at 2396 (citing NetChoice v. Moody, 34 F.4th 1196, 1209 (11th Cir. 2022)).

42 Ibid. (citing NetChoice v. Paxton, 49 F.4th 439, 466, 494 (5th Cir. 2022)).

43 Footnote Ibid. at 2399.

44 418 U.S. 241 (1974).

45 475 U.S. 1 (1986).

46 512 U.S. 622 (1994).

47 515 U.S. 557 (1995).

48 NetChoice, 144 S. Ct. at 2400–01.

49 Ibid. at 2401–03.

50 Ibid. at 2411–12 (Jackson, J., concurring in part and concurring in the judgment).

51 Ibid. at 2403–04.

52 Ibid. at 2405.

53 Ibid. at 2406 (quoting Hurley, 515 U.S. at 569, 574).

55 Ibid. at 2407–08.

56 Ibid. at 2412 (Thomas, J., concurring in the judgment).

57 Ibid. at 2438 (Alito, J., concurring in the judgment).

58 Eric Goldman made this point succinctly in a brief essay. See Eric Goldman, Of Course the First Amendment Protects Google and Facebook (and It’s Not a Close Question), Knight First Amendment Inst. at Columbia Univ. (Feb. 26, 2018), https://perma.cc/UU8L-R72T. For a thorough description of the process, see Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harv. L. Rev. 1598 (2018).

59 Tarleton Gillespie, Custodians of the Internet 13 (2018).

60 For a fuller development of this argument, tying it to the implied First Amendment right of association, see Ashutosh Bhagwat, Our Democratic First Amendment 56–57 (2020).

61 See Robert C. Post, The Constitutional Concept of Public Discourse: Outrageous Opinion, Democratic Deliberation, and Hustler Magazine v. Falwell, 103 Harv. L. Rev. 601, 684 (1990).

62 Cf. John D. Inazu, Virtual Assembly, 98 Cornell L. Rev. 1093, 1141–42 (2013).

63 Such deplatforming decisions are not uncommon. See, e.g., Joshua Partlow, Facebook’s Decision to Shut Down Militia Pages Prompts Backlash among Some Targets, Washington Post (Aug. 21, 2020).

64 See Brown v. Entertainment Merchants Ass’n, 564 U.S. 786, 790 (2011); United States v. Stevens, 559 U.S. 460, 479–80 (2010); Winters v. New York, 333 U.S. 507, 510 (1948).

65 Radiolab: Post No Evil, WNYC Studios (Aug. 17, 2018), https://perma.cc/B8SQ-27VM.

66 Aarti Shahani, With “Napalm Girl,” Facebook Humans (Not Algorithms) Struggle to Be Editor, NPR (Sept. 10, 2016, 11:12 PM), https://perma.cc/HE6Q-N7WB.

67 See Farhad Manjoo, Twitter, It’s Time to End Your Anything-Goes Paradise, N.Y. Times (Nov. 22, 2017); see also Lindy West, This American Life: Ask Not for Whom the Bell Trolls; It Trolls for Thee, Chi. Pub. Radio (Jan. 23, 2015), https://perma.cc/5VUC-8KJW. Lindy West’s segment on the harms of trolls led to Twitter/X’s then-CEO admitting the platform’s failures to address harassment. See Caitlin Dewey, Twitter CEO Dick Costolo Finally Admits the Obvious: Site Has Failed Users on Abuse, Washington Post (Feb. 5, 2015).

68 The Twitter Rules, Twitter: Rules and Policies, https://perma.cc/GNC7-7Q3R (last visited June 25, 2021).

69 David Klepper, Twitter Ends Enforcement of COVID Misinformation Policy, AP (Nov. 29, 2022), https://apnews.com/article/twitter-ends-covid-misinformation-policy-cc232c9ce0f193c505bbc63bf57ecad6.

70 See Benjamin Franklin, Apology for Printers, Pennsylvania Gazette (June 10, 1731), https://perma.cc/83V7-X8NP.

71 Whether or not they do so turns entirely on the definition of “ideological.” If by that one means that platforms favor “liberal” over “conservative” content, there appears to be no evidence that they do. But if a ban on hate speech can be considered ideological, then the major platforms clearly engage in such behavior.

72 NetChoice, 144 S. Ct. at 2413 (Thomas, J., concurring in the judgment); ibid. at 2438 (Alito, J., concurring in the judgment).

73 Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston, Inc., 515 U.S. 557, 561 (1995).

74 Boy Scouts of Am. v. Dale, 530 U.S. 640 (2000).

75 Ibid. at 659.

76 Ibid. at 656–57.

77 W. Va. Bd. of Educ. v. Barnette, 319 U.S. 624 (1943).

78 Admittedly, the Court has at times insisted upon “[t]he constitutional equivalence of compelled speech and compelled silence.” Riley v. National Fed’n of the Blind of N.C., Inc., 487 U.S. 781, 797 (1988). Given the ubiquity of disclosure obligations in the commercial and campaign finance contexts, however, these assertions cannot be taken entirely seriously.

79 New York Times v. Sullivan, 376 U.S. 254, 275 (1964) (quoting 4 Annals of Congress 934 (1794)).

80 See Eric Goldman, The Constitutionality of Mandating Editorial Transparency, 73 Hastings L.J. 1203 (2022) (arguing that transparency requirements regarding platform content moderation policies impose substantial First Amendment burdens, and so are presumptively unconstitutional).

81 Cf. Am. Beverage Ass’n v. City & Cnty. of San Francisco, 916 F.3d 749, 757 (9th Cir. 2019) (en banc) (invalidating a warning requirement on advertisements of sugar-sweetened beverages because the size of the warning drowned out the advertisers’ speech).

82 I say mainly because UC Berkeley School of Law Dean Chemerinsky, who is famously progressive, once made a similar proposal. Prasad Krishnamurthy and Erwin Chemerinsky, How Congress Can Prevent Big Tech from Becoming the Speech Police, The Hill (Feb. 18, 2021, 8:00 AM), https://perma.cc/G8EJ-3XCM.

83 See, e.g., Davey Alba, Facebook Must Better Police Online Hate, State Attorneys General Say, N.Y. Times (Aug. 5, 2020), www.nytimes.com/2020/08/05/technology/facebook-online-hate.html.

84 Matal v. Tam, 137 S. Ct. 1744, 1764 (2017) (plurality opinion).

85 United States v. Alvarez, 567 U.S. 709, 727–28 (2012).

86 United States v. Playboy Ent. Grp., 529 U.S. 803, 826–27 (2000).

87 See, e.g., Va. State Bd. of Pharm. v. Va. Citizens Consumer Council, Inc., 425 U.S. 748, 770–71 (1976).

88 Brandenburg v. Ohio, 395 U.S. 444, 447 (1969).

89 The relevant law is commonly known as FOSTA-SESTA. For a discussion of the legislation’s terms and background, see Charles Matula, Any Safe Harbor in a Storm: SESTA-FOSTA and the Future of § 230 of the Communications Decency Act, 18 Duke L. & Tech. Rev. 353 (2020). For a critique of the law in action, see David McCabe and Kate Conger, Stamping Out Online Sex Trafficking May Have Pushed It Underground, N.Y. Times (Dec. 17, 2019).

90 S.B. 7072, 2021 Leg. (Fla. 2021).

91 Biden v. Knight First Amend. Inst. at Columbia Univ., 141 S. Ct. 1220, 1222 (2021) (Thomas, J., concurring).

92 See Klonick, supra n. 58, at 1625–30.

93 Eric Goldman and Jess Miers, Online Account Terminations/Content Removals and the Benefits of Internet Services Enforcing Their House Rules, 1 J. Free Speech L. 191, 204–07 (2021).

94 See, e.g., Elonis v. United States, 575 U.S. 723 (2015); Counterman v. Colorado, 600 U.S. 66 (2023).

95 New York v. Ferber, 458 U.S. 747 (1982).

96 Ashcroft v. Free Speech Coal., 535 U.S. 234 (2002).

97 Goldman and Miers, supra n. 93, at 208–09.

98 Ryan Mac and Tiffany Hsu, Twitter’s U.S. Ad Sales Plunge 59% as Woes Continue, N.Y. Times (June 5, 2023), www.nytimes.com/2023/06/05/technology/twitter-ad-sales-musk.html.

99 James Madison, Public Opinion, Nat’l Gazette (Dec. 19, 1791), https://perma.cc/T92L-ZXM6.

100 NetChoice, 144 S. Ct. at 2405 (listing awful viewpoints protected by Texas law).

101 Julia Greenberg, Why Facebook and Twitter Can’t Just Wipe Out ISIS Online, Wired (Nov. 21, 2015), https://perma.cc/T263-YKUW.

102 Matal v. Tam, 137 S. Ct. 1744, 1766–67 (2017) (Kennedy, J., concurring in part and concurring in the judgment); R.A.V. v. City of St. Paul, 505 U.S. 377, 391–92 (1992).

103 Tex. Civ. Prac. & Rem. Code § 143A.002(a)(1)–(3).

104 Monika Bickert, VP of Content Policy, Removing Holocaust Denial Content, Facebook (Oct. 12, 2020), https://perma.cc/X5QK-T3SC.
