Anthropic Releases Post-Mortem Analysis on Claude Code Quality Decline: Three Product Layer Changes, Not Model Issues
According to monitoring by Beating, Anthropic's engineering team has confirmed that the decline in Claude Code output quality reported by users over the past month stemmed from three independent changes at the product layer. The changes affected Claude Code, the Claude Agent SDK, and Claude Cowork; the API and the underlying models were unaffected. The three issues were fixed on April 7, 10, and 20 respectively, with the final fix shipping in v2.1.116.

The first change landed on March 4, when the team lowered Claude Code's default inference strength from high to medium to reduce the occasional long delays (the UI appearing frozen) that occur under heavy inference load. Users widely reported degraded performance, and the change was rolled back on April 7; the default is now xhigh for Opus 4.7 and high for other models.

The second issue was a bug introduced on March 26 in a change designed to clear old inference records once a session had been idle for over an hour, in order to reduce session-recovery costs. A flaw in the implementation caused the clearing to run not just once but on every subsequent turn, so the model gradually lost its earlier inference context, which surfaced as forgetfulness, repeated actions, and abnormal tool calls. Because every request then missed the cache, the bug also accelerated users' quota consumption. The team said two unrelated internal experiments obscured the conditions needed to reproduce the issue, so the investigation took over a week; a fix shipped on April 10. A follow-up code review of the problematic PR using Opus 4.7 showed that Opus 4.7 could detect this bug, while Opus 4.6 could not.
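The class of bug described, a cleanup step intended to run once when an idle session resumes but which instead fires on every subsequent turn, can be sketched as follows. This is a hypothetical illustration only; all names and the trimming policy are invented here and are not Anthropic's actual code.

```python
# Hypothetical sketch of the bug class described in the article: a trim that
# should run at most once per idle-resume instead runs on every turn, so the
# session's older reasoning context keeps shrinking. Names are illustrative.

IDLE_THRESHOLD_S = 3600  # one hour of inactivity triggers the cleanup


class Session:
    def __init__(self):
        self.history = []            # prior turns, including reasoning records
        self.last_active = 0.0       # timestamp of the previous turn
        self.resumed_after_idle = False

    def begin_turn(self, now: float) -> None:
        idle = (now - self.last_active) > IDLE_THRESHOLD_S
        self.last_active = now

        # Buggy version: the flag is set on idle-resume but never cleared,
        # so the trim also runs on every turn that follows.
        if idle:
            self.resumed_after_idle = True
        if self.resumed_after_idle:
            self._trim_old_reasoning()  # intended to happen at most once
        # The fix would reset the flag after trimming:
        # self.resumed_after_idle = False

    def _trim_old_reasoning(self) -> None:
        # Drop the oldest half of the history (an illustrative policy only).
        self.history = self.history[len(self.history) // 2:]
```

Each trim also invalidates any prompt cache built on the dropped prefix, which is consistent with the article's note that the bug caused cache misses on every request and faster quota consumption.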
The third change shipped on April 16 alongside Opus 4.7, when the team added a directive limiting output length to the system prompt: "Text between tool calls should not exceed 25 words, and the final response should not exceed 100 words unless the task requires more detail." Internal testing showed no regression for several weeks, but after launch the directive compounded with other prompts and degraded coding quality, affecting Sonnet 4.6, Opus 4.6, and Opus 4.7. Expanded evaluations found a 3% decline for both Opus 4.6 and Opus 4.7, and the directive was rolled back on April 20.

Because the three changes affected different user groups and took effect at different times, they presented as widespread but inconsistent quality degradation, which complicated troubleshooting. Anthropic said that going forward it will require more internal employees to use the same public builds as users, run the full model-evaluation suite for every modification to the system prompt, and adopt a staged (gray-release) rollout period. As compensation, Anthropic has reset the usage quotas of all subscribed users.