Anthropic sues the Trump administration over Pentagon blacklisting

Artificial intelligence startup Anthropic filed a lawsuit against the Trump administration on Monday, after being blacklisted and labeled as a threat to U.S. national security.

In the complaint submitted to the U.S. District Court in California, Anthropic stated that these actions are “unprecedented and illegal” and are “causing irreparable harm to Anthropic.”

The lawsuit states that contracts between Anthropic and the federal government have been canceled. Current and future contracts with private entities are also uncertain, potentially jeopardizing hundreds of millions of dollars in revenue in the short term.

Beyond these direct financial losses, Anthropic’s reputation and its core First Amendment rights are also under attack. If the court does not provide legal relief, these damages could further escalate in the coming weeks and months.

This lawsuit is the latest development in the two-week-long intense conflict between Anthropic and the Trump administration. The dispute mainly revolves around how the company’s AI models are used on the battlefield and in other scenarios.

Before this controversy became public at the end of last month, Anthropic had been an important early partner of the U.S. government.

Anthropic’s AI model Claude has been deeply integrated into the Department of Defense over the past year. Until recently, Claude was the only AI model approved for use in classified systems. Reports indicate that the Department of Defense extensively used this technology in military operations, including targeting missile strikes during the Iran conflict.

Last Thursday, Anthropic confirmed that the company had been officially designated as a “supply chain risk.” This rare measure has historically been applied mainly to foreign adversary companies.

According to this designation, U.S. defense contractors and suppliers working with the Pentagon must prove that their systems do not use Anthropic’s AI models.

Last month, Trump also posted on social media calling for federal agencies to “immediately stop” using Anthropic’s technology, writing: “We will decide the fate of this country, not some out-of-control radical left AI company. Those people have no idea what the real world is like.”

Anthropic has asked the court to vacate the supply chain risk designation and to issue a temporary restraining order blocking its enforcement while the case proceeds.

In July 2025, Anthropic signed a $200 million contract with the Department of Defense and became the first AI lab to deploy AI technology on the Pentagon’s classified networks.

However, the two sides later reached an impasse in contract renewal negotiations, mainly over the scope of AI model usage.

The Department of Defense maintains that the military must be able to use the technology “for all legitimate purposes,” without vendor-imposed restrictions on critical capabilities. In its view, allowing vendors to limit legitimate uses would give them leverage over military command systems and could jeopardize the safety of personnel.

Therefore, the Department of Defense wants Anthropic to grant it unrestricted access to its AI models for all legitimate uses, while Anthropic seeks assurances that its models will not be used in fully autonomous weapons systems or for large-scale domestic surveillance.

An Anthropic spokesperson stated on Monday: “Seeking judicial review does not change our long-standing commitment to using AI to maintain national security, but it is a necessary step to protect our business, clients, and partners. We will continue to seek solutions through all channels, including dialogue with the government.”

(Source: Cailian Press)
