Anthropic Sues Trump Administration Over Pentagon Blacklisting
Artificial intelligence startup Anthropic filed a lawsuit against the Trump administration on Monday, after being blacklisted and labeled as a threat to U.S. national security.
In the complaint submitted to the U.S. District Court in California, Anthropic stated that these actions are “unprecedented and illegal” and are “causing irreparable harm to Anthropic.”
The lawsuit states that contracts between Anthropic and the federal government have been canceled. Current and future contracts with private entities are also uncertain, potentially jeopardizing hundreds of millions of dollars in revenue in the short term.
Beyond these direct financial losses, Anthropic’s reputation and its core First Amendment rights are also under attack. If the court does not provide legal relief, these damages could further escalate in the coming weeks and months.
The lawsuit is the latest development in an intense two-week conflict between Anthropic and the Trump administration, centered mainly on how the company’s AI models may be used on the battlefield and in other scenarios.
Before this controversy became public at the end of last month, Anthropic had been an important early partner of the U.S. government.
Anthropic’s AI model Claude has been deeply integrated into the Department of Defense over the past year. Until recently, Claude was the only AI model approved for use in classified systems. Reports indicate that the Department of Defense extensively used this technology in military operations, including targeting missile strikes during the Iran conflict.
Last Thursday, Anthropic confirmed that the company had been officially designated as a “supply chain risk.” This rare measure has historically been applied mainly to foreign adversary companies.
According to this designation, U.S. defense contractors and suppliers working with the Pentagon must prove that their systems do not use Anthropic’s AI models.
Last month, Trump also posted on social media calling for federal agencies to “immediately stop” using Anthropic’s technology, writing: “We will decide the fate of this country, not some out-of-control radical left AI company. Those people have no idea what the real world is like.”
Anthropic has asked the court to revoke the supply chain risk designation and to issue a temporary restraining order blocking enforcement of the measure while the case proceeds.
In July 2025, Anthropic signed a $200 million contract with the Department of Defense and became the first AI lab to deploy AI technology on the Pentagon’s classified networks.
However, the two sides later reached an impasse in contract renewal negotiations, mainly over the scope of AI model usage.
The Department of Defense maintains that the military must be able to use the technology “for all legitimate purposes” without vendor-imposed restrictions on critical capabilities, arguing that if vendors limit legitimate uses in ways that affect military command systems, personnel safety could be jeopardized.
Therefore, the Department of Defense wants Anthropic to grant it unrestricted access to its AI models for all legitimate uses, while Anthropic seeks assurances that its models will not be used in fully autonomous weapons systems or for large-scale domestic surveillance.
An Anthropic spokesperson stated on Monday: “Seeking judicial review does not change our long-standing commitment to using AI to maintain national security, but it is a necessary step to protect our business, clients, and partners. We will continue to seek solutions through all channels, including dialogue with the government.”
(Source: Cailian Press)