The next AI fight: Do the chatbots have First Amendment rights?

The fight between Anthropic and the Pentagon looks at first like a fight about AI safety — a principled tech company drawing ethical lines in the sand. It is at least partly that. But it’s also a First Amendment case.

It’s a test of whether the executive branch can summarily execute its vendors for “noncompliance.” It’s an investor risk story for everyone who put hundreds of billions into AI companies on the assumption that the U.S. government would be a customer, not a corporate murderer. And it’s a dress rehearsal for every painful question that humanity hasn’t figured out how to answer about the most powerful information technology it has ever built.

What’s the legal status of AI? Who’s in charge of it? When — not if — something goes wrong, who’s responsible?

In other words, this fight is even bigger than it looks. And it’s even stranger than it seems.

The buildup, the breakup, the lawsuit

The conflict began when Anthropic refused to strip two safety guardrails from the specialized version of its Claude AI system that it provides to the Pentagon, under a deal worth some $200 million: protections against warrantless mass domestic surveillance of Americans, and against deployment in fully autonomous weapons systems. Late last month, CEO Dario Amodei detailed the Pentagon’s response: a threat to designate Anthropic a “supply chain risk,” a label that has previously been reserved for foreign adversaries like Chinese telecom firms — never an American company.

The Pentagon followed through in early March, effectively blacklisting Anthropic from government contracts. Anthropic sued, warning the designation could cost it billions. A hearing on whether to grant Anthropic temporary relief is scheduled for Tuesday.

A more specific triggering incident has since been widely reported: After the January raid that captured the Venezuelan leader Nicolás Maduro, an Anthropic executive contacted Palantir $PLTR — through which Claude was integrated into Pentagon systems — asking how its AI had been used. Palantir flagged the inquiry to Pentagon officials, who read it as disapproval of a classified operation, kicking off the failed negotiations that preceded the rift. Pentagon CTO Emil Michael confirmed many of the details to The Wall Street Journal. “There is no chance,” he said. “There’s no partnership that can be had.”

What Michael didn’t say publicly was revealed in a court filing last Friday: Michael emailed Amodei on March 4 — the day after the Pentagon finalized the supply-chain designation — to say the two sides were “very close” on the exact two issues the government now cites as evidence that Anthropic poses a national security threat. The email is now evidence, suggesting, if not proving, that the designation was a bargaining chip rather than a straightforward flagging of risk. If the two sides were “very close” even as the designation was being finalized, how much of a security risk could the Pentagon really have considered Anthropic to be?

AI meets 1A

The case is moving fast, and it arrives at a particularly uncanny moment in American history, under an administration so unsubtle about its intentions that the constitutional fight legal experts broadly anticipated is happening even earlier than those same experts expected. Namely, one of Anthropic’s claims alleges a First Amendment violation, arguing that forcing a company to build tools it finds ethically unconscionable is compelled speech.

The core of that argument rests on what kind of machine an AI model actually is, said Matthew Seligman, founder of Grayhawk Law, who’s taught at Harvard Law, and was a fellow at Stanford Law School’s Constitutional Law Center.

“What Anthropic is arguing, in essence, is that they are different from a traditional defense contractor because what they are offering the government is a speech machine—one whose outputs are information, not explosions,” Seligman told Quartz. “The big question is whether Anthropic’s technology is more appropriately analogized, for First Amendment purposes, to Lockheed Martin $LMT — or to a defense analyst. And I think that really highlights the fact that these AI models don’t fit comfortably into either of the traditional legal categories.

“The law is going to have to develop an understanding of how to analyze them,” he added, “as has happened many times over the centuries when the law has had to adapt to a new technology that made its old categories obsolete.”

‘A very unsettling place to be’

If the government wins, the implications extend well beyond this case.

“If you give the government a license to kill companies, then companies are always going to be under threat of execution, and therefore they will always feel like they need to do what the government says,” Seligman said.

The worry is about that kind of power, and this administration’s use of that kind of power. “If the [Department of Defense] walks up to a company and says, ‘We want to use your technology, and if you don’t let us, we’re going to kill your company’ — that’s a very unsettling place to be.”

The implications for investors are just as serious. “If you’re an investor, and you know that any one of your portfolio companies could be killed at any time if they don’t go along with whatever request the Department of Defense makes of them, that introduces a huge amount of risk,” Seligman said — particularly if you believe a current or future administration won’t use that power with restraint.

The even bigger legal fight taking shape

The Anthropic case is playing out against a broader and little-understood legal backdrop. Legal scholars and commentators are warning of a “growing consensus” in the field that “generative AI outputs are protected speech.” Others have issued still broader warnings that AI could soon accumulate constitutional and other legal rights, such as owning property and financial assets.

This context is poorly understood because it is genuinely dense and incredibly detailed, a web of First Amendment jurisprudence, interpretation, precedent, and case law. But experts point to a common theme: the potential for constitutional protections to emerge that shield the AI industry from regulation.

Stephenie Brown, a lawyer who teaches business law and AI at Virginia Commonwealth University, put it plainly, telling Quartz that First Amendment protection is “the gold standard” for avoiding regulation.

“It doesn’t just shield against one lawsuit,” she said. It limits large categories of potential regulation wholesale. State-level oversight becomes constitutionally fraught. Federal rules must clear a much higher bar. The protection, if it’s granted, may be vast.

While it may be dystopian to consider that AI could be granted some constitutional rights, it’s also less of a reach than it might seem. In an interview, Mary Ann Franks, a professor of intellectual property, technology, and civil rights law at the George Washington University, traced the corporate capture of First Amendment doctrine, outlining how interpretation has been expanded to include some rights for corporations.

This is, in part, the result of a strategic pivot on the part of Republicans, Franks said. “In the 80s and 90s, Republicans were like, ‘Wait a minute, this is actually great for us — because there’s a version of free speech that isn’t about letting the dirty hippies talk. It’s about letting tobacco companies talk.’”

The First Amendment, once associated with labor organizers and civil rights activists, became a deregulatory instrument. Tech companies, Franks said, are simply the latest beneficiaries. “It’s a really sexy, catchy thing to do when you can disguise your profit motive or your selfishness as, ‘Oh, no, we’re respecting a very transcendental principle of freedom of expression.’ It really works on people.”

The result, she argues, is a First Amendment now flipped, inverted in crucial ways. “If the First Amendment was supposed to protect the people from the government, all it is doing right now is protecting the government from the people,” Franks said.

The state of AI regulation

All of this is unfolding against a regulatory landscape that is less nascent than stillborn.

The Trump administration has made its position clear, arguing that AI development should move fast, and that the federal government — not state legislatures, not courts, and certainly not safety-minded contractors — should set the terms.

In December, Trump issued an executive order directing Attorney General Pam Bondi to establish a task force to legally challenge state AI laws deemed too restrictive, and instructing the Commerce Department to withhold federal funds from states that don’t comply. The order explicitly moves away from the Biden administration’s focus on safety and equity, instead prioritizing rapid development. Virginia, which stands to lose almost $1.5 billion in broadband funding, is already drafting its AI legislation with one eye on Washington. But it’s just one state among many weighing the loss of federal funds.

The irony? While the Trump administration is aggressively blocking formal AI regulation, it is simultaneously demonstrating, through the Anthropic case, exactly why such regulation is necessary. What Trump is pursuing isn’t a hands-off approach to AI — it’s control of AI at the executive level, unchecked by either Congress or the courts.

Seligman, the former Harvard law lecturer, described the Anthropic fight as an extension of the larger executive power grab. “This is by far the most aggressive administration in certainly recent and probably all of American history with respect to executive power,” he said. “The fact that there are more challenges to executive action is a reflection of the fact that there’s been more aggressive executive action.”

The companies that have sued, he noted, have largely prevailed.

Meanwhile, even inside the AI industry, the absence of a coherent regulatory framework is widely recognized as a problem. Nick Tiger, associate general counsel at the AI company Pearl, argued that regulation is needed, and that the infrastructure for it already exists — it just hasn’t yet been marshaled.

“You’ve got organizations already in place that can do this,” he said. “The Consumer Financial Protection Bureau could get involved and give guidance and rules about what is a misleading AI customer experience, when you have to disclose. The FTC regulates all these commercial transactions.” Tiger stopped short of calling for a new agency, though. “I don’t necessarily think we need to create a Department of AI or something like that,” he said.

The major obstacle, Tiger argued, is a knowledge gap on both sides of the table. “There are regulators who don’t understand the advances in technology, and there’s a lot of misinformation when they’ve got constituents in their ear telling them things that aren’t true. But then on the other side, AI engineers don’t understand all of the public policy nuances.” The result, he predicted, will necessarily be some kind of draw. “It’s just going to be a push and pull until we eventually land somewhere in the middle.”

Right now, procurement is doing the job one might expect of regulators, even if not in the way actual regulators would do it. The Defense Department remains the federal government’s largest technology buyer, and its contract requirements effectively become industry standards — spreading well beyond military systems into the broader commercial market. Which is why, following the “supply chain risk” designation, at least 100 of Anthropic’s customers, from pharma to fintech, have already moved to pause or cancel their contracts.

What happens next

The outcome of Tuesday’s hearing will determine whether the Pentagon’s novel use of the supply chain designation survives its first legal test. But Seligman cautioned against reading too much into any single case, including the Anthropic one. “The public is quick to infer that the entire big-picture question is going to be resolved by a single case,” he said. “And that is almost never true.”

In other words, the Anthropic-Pentagon dispute, as consequential as it is, will not clear up all or even many of the First Amendment questions that AI raises, including the most existential ones.

“There’s an understandable urge to have these big-picture questions resolved early and definitively by a single court case,” he said. “But that’s not going to happen.”
