Anthropic's appeal fails; the Pentagon's "supply chain risk" label stands

Deep Tide TechFlow news, April 9: according to CoinTelegraph, the U.S. Court of Appeals for the District of Columbia Circuit rejected an emergency request by AI company Anthropic, upholding the Department of Defense's designation of the company as a "national security supply chain risk." The three-judge panel held that the government's interest in regulating AI technology during a military conflict outweighed the financial and reputational harm Anthropic might suffer. The label has never before been applied to a U.S.-based company, and it will bar Pentagon contractors from using Anthropic's Claude models.

The dispute stems from a contract the two sides signed in July 2025; negotiations broke down in February 2026. The government demanded that Anthropic allow Claude to be used for military purposes without restriction, while Anthropic refused to permit its use in lethal autonomous weapons or domestic mass surveillance. Trump later ordered federal agencies to fully discontinue Anthropic products, and Anthropic filed a lawsuit in March. The case is now proceeding on two parallel legal tracks: in the U.S. District Court for the Northern District of California and in the District of Columbia.
