Can the government tell social media platforms how to moderate their content? The Supreme Court is about to decide

 


As the presidential election nears, ‘Murthy v. Missouri’ will have major implications for online discourse.

BY Issie Lapowsky

Government officials have been talking tough to Big Tech for years now. 

President Biden has publicly accused Facebook of “killing people” with vaccine misinformation, while members of his administration have privately pushed the platforms to remove objectionable posts “ASAP.” During the Trump era, the president repeatedly skewered tech giants for “shadowbanning” conservatives and even threatened to sue tech companies for bias—a promise he followed through on after leaving office.

Legal scholars have a name for this kind of government-issued goading—which often comes in lieu of actual legislation: “jawboning.” But when does jawboning cross the line from mere persuasion to unconstitutional efforts by the government to control free speech? 

On Monday, the Supreme Court will hear arguments in a case that will answer that question head-on. The case, Murthy v. Missouri, stems from a lawsuit filed by the attorneys general of Missouri and Louisiana, as well as a group of social media users—including a mix of doctors and such right-wing media personalities as the Gateway Pundit’s Jim Hoft. The plaintiffs argued that officials in the Biden administration censored their speech related to COVID-19 vaccines and mask mandates, the 2020 election, and a range of other issues, by pressuring tech platforms to remove their posts. 

The court’s decision will “have broad implications for public discourse online,” says Jennifer Jones, staff attorney at the Knight First Amendment Institute at Columbia University. “It will determine where the line between persuasion and coercion should be drawn.”

Last year, a district court in Louisiana sided with the plaintiffs in the underlying case and barred any of the government agencies named in the suit, including the Department of Health and Human Services (HHS) and the Federal Bureau of Investigation (FBI), from working with social media companies on issues related to protected speech. The court also blocked government officials from working with certain leading misinformation researchers at Stanford University and the University of Washington, who conservatives allege are an extension of the government’s censorship regime due to their close collaboration with government agencies in detecting misinformation related to the 2020 election. The move scrambled efforts by government officials, tech companies, and researchers to collaborate on much-needed research and platform protections ahead of November’s election. 

While a Fifth Circuit appeals court later rolled back parts of the lower court’s ruling, much of it remained in place, leading to widespread uncertainty about how both federal agencies and local election officials could proceed. The Biden administration ultimately asked the Supreme Court to take up the case and decide the issue once and for all.

A high-stakes dispute over jawboning

The question at the heart of the case is not whether the Biden administration infringed on tech companies’ own First Amendment rights to moderate content—though another set of cases this term will grapple with the speech rights of platforms. Rather, the question is whether the government effectively turned social media companies into pro-censorship proxies by directing those companies to remove certain content—and threatening serious consequences if they didn’t. 

The Biden administration argues in its petition to the court that government officials need to be free to “inform, persuade, and to criticize,” and that in doing so, government agencies are merely providing information to private companies. The fact that companies sometimes respond in agreement doesn’t make them de facto arms of the state, the administration argues. “Were it otherwise, every successful public-awareness campaign or use of the bully pulpit would create state action,” the Biden administration wrote. 

On the other side of the case, attorneys for Missouri, Louisiana, and the social media users argued that the Biden administration’s position “flips the First Amendment on its head,” and said that while the government does have a right to speak freely, “it cannot pressure and coerce private companies to censor ordinary Americans.” To underscore the point, the states’ brief points to private emails from Biden administration staffers, in which they made pointed requests for content and accounts to be removed and not-so-subtly hinted that there would be adverse repercussions for tech companies that failed to comply. In one such email, a White House official, apparently unhappy with Facebook’s response to COVID misinformation, told the company that the White House was “considering our options on what to do about it.”


If the government can circumvent the First Amendment with “thinly veiled threats,” the states argue, “[i]t would make the First Amendment, the most fundamental and most fragile liberty, the easiest of rights to violate.”

While the tech industry hasn’t picked sides in the case, it has a lot riding on the outcome. For starters, not all communications between tech companies and government officials involve jawboning. Companies often voluntarily rely on leads from government sources and external researchers to identify emerging risks and elevate trustworthy information. Particularly during election cycles, local election officials play a key role in keeping companies up to date about the outcome of races, registration deadlines, polling locations, and more. At the very least, this case could complicate those efforts and make it that much harder for people to find factual information about voting this November.

But the stakes of the case are even more existential for the industry. Jawboning has always been a pain for tech companies, which frequently feel compelled to make content decisions they wouldn’t otherwise have made. And yet, they’ve at least been free to make those decisions. Only the government can violate people’s First Amendment rights, after all. But if the Supreme Court determines that platforms’ content decisions can sometimes constitute state action, as Missouri and Louisiana are alleging, it would open these private companies up to an unsustainable amount of liability. 

As a range of tech industry groups wrote in a brief to the court, that kind of logic would mean tech platforms “get hit coming and going”—jawboned by government agencies, then sued for compliance. “[S]uch a rule would diminish focus on government officials whose conduct may have violated the First Amendment, which is where the focus belongs,” the brief reads. The industry is asking the court to tread carefully in its decision—not letting the government off the hook, but not putting tech companies on the hook for the government’s actions either.

Persuasion versus coercion

This is not the first time the Supreme Court has taken up a case on jawboning, but it is the first time in the internet age. The last case, 1963’s Bantam Books v. Sullivan, revolved around a government commission that pressured private book distributors to remove books deemed obscene from circulation. Among other intimidation tactics, the commission sent police officers to the distributors’ locations to enforce compliance. After the distributors sued, the Supreme Court agreed that the commission had crossed the line from mere persuasion into unconstitutional coercion. 

Drawing a distinction between persuasion and coercion will be key to the Murthy case, says Jones of the Knight First Amendment Institute, which has not backed either party in the case. And yet, despite some recent lower court rulings, the Supreme Court hasn’t weighed in on what exactly constitutes coercion, leaving a vast gray area open for interpretation. “This is an area of First Amendment doctrine that is in really dire need of clarity,” Jones says. 

Arguments in the Murthy case come just weeks after the court heard arguments in another set of cases about online speech. Those cases, NetChoice v. Paxton and Moody v. NetChoice, concern the constitutionality of laws in Florida and Texas that require tech platforms to carry certain political speech. In some ways, the NetChoice cases are the inverse of Murthy, and they’ve flipped political alliances accordingly. Whereas in Murthy, conservatives are accusing the government of censoring speech by way of tech platforms, in the NetChoice cases, conservatives are defending actual laws that dictate what speech social media platforms publish.

The fact is, while the states in the Murthy case have tried to frame the government’s efforts to influence online speech as an anti-conservative attack, both parties are guilty of jawboning. Just as Democrats have threatened and cajoled tech platforms into removing hate speech and misinformation, Republicans have pressured Facebook to remove pages inciting violence against GOP lawmakers and delete fact-checks from anti-abortion posts. Conservatives’ constant complaints about alleged tech censorship are, in and of themselves, a form of jawboning: attempting to compel private companies to publish speech they would otherwise suppress. 

In an environment where just about every decision tech platforms make becomes highly politicized, lawmakers on both sides of the aisle have grown accustomed to making pointed, if often empty, threats against Big Tech. Now, the Supreme Court will decide just how far those threats can go.

 

ABOUT THE AUTHOR

Issie Lapowsky is a journalist covering the intersection of tech, politics, and national affairs.

