Anthropic-Pentagon clash escalates over military AI terms - Brave New Coin
At the center of the standoff is Anthropic’s refusal to remove safeguards that block use of its models for autonomous weapon targeting and U.S. domestic surveillance. Reuters says Pentagon officials have argued that government use should be constrained by U.S. law, not a private company’s usage policy. Axios separately reported that Pentagon negotiators are pushing for an “all lawful purposes” standard in discussions not only with Anthropic, but also with other major AI labs.
According to Reuters, Hegseth presented Anthropic with an ultimatum during this week’s meeting, with options including designating Anthropic a supply-chain risk or invoking the Defense Production Act to force changes. Reuters also reported the Pentagon gave Anthropic a deadline of Friday at 5 p.m. to respond. The commercial implications are significant because Anthropic has been deeply integrated into sensitive government workflows, and Reuters notes it was until recently the only large language model provider on classified U.S. networks.
Anthropic has kept its public messaging measured. A company spokesperson told Reuters the talks were "continued good-faith conversations" aimed at ensuring Anthropic can support national security "reliably and responsibly." Axios also quoted an Anthropic spokesperson saying the company is in "productive conversations, in good faith" with the Department of War on how to handle "new and complex issues."
The Pentagon’s side has also framed the issue as operational rather than ideological. Axios quoted chief Pentagon spokesman Sean Parnell saying the relationship is under review and that partners must be willing to help U.S. forces “win in any fight.” Axios further quoted a senior Pentagon official warning that disentangling Anthropic would be painful and that the department would “make sure they pay a price” if forced to do so.
For investors and enterprise buyers, the dispute matters beyond any one contract. Reuters and Axios both indicate this is a precedent-setting battle over whether frontier AI providers can enforce product-level guardrails once their systems are embedded in national security environments. Reuters also cited government-contracts lawyer Franklin Turner, who said punitive action against Anthropic would be "unprecedented" and would likely trigger litigation.
In business terms, this is no longer just an AI ethics debate. It is a power struggle over who sets the operating rules for strategic software: the vendor, the customer, or the state.