Guojin Securities: AI Agents Drive Non-Linear Growth in Computing Demand, Focus on Industrial Chain Investment Opportunities
March 23 news: 248,000 GitHub stars, a fourfold increase in token consumption, and 1445% growth in enterprise inquiries. Together, these data points outline a key shift underway in the AI industry: the paradigm leap from Prompt to long Agent has begun. Data from the OpenRouter platform show that multi-step reasoning is rapidly replacing single-turn interaction; Anthropic's real-world tests indicate that a single Agent consumes roughly four times the tokens of conversational mode, while multi-Agent systems can reach 15 times. As Agent runtimes continue to lengthen, demand for computing power is entering a new phase of nonlinear expansion.
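The token multipliers cited above can be turned into a rough cost estimate. This is a hypothetical illustration only: the 4x and 15x factors come from the figures reported here, while the baseline token count and the per-token price are assumed values chosen for the sketch.

```python
# Hypothetical illustration of the token-consumption multipliers cited above.
# The 4x and 15x factors are from the article; the baseline token count and
# the price per 1,000 tokens are assumed values for illustration only.

BASELINE_TOKENS = 2_000       # assumed tokens for one single-turn conversation
AGENT_MULTIPLIER = 4          # single long-running Agent (per the article)
MULTI_AGENT_MULTIPLIER = 15   # multi-Agent system (per the article)
PRICE_PER_1K_TOKENS = 0.01    # assumed price in USD, illustrative only

def estimated_cost(tokens: int, price_per_1k: float) -> float:
    """Rough cost estimate: token count scaled by a per-1k-token price."""
    return tokens / 1_000 * price_per_1k

for label, mult in [("conversation", 1),
                    ("single agent", AGENT_MULTIPLIER),
                    ("multi-agent", MULTI_AGENT_MULTIPLIER)]:
    tokens = BASELINE_TOKENS * mult
    cost = estimated_cost(tokens, PRICE_PER_1K_TOKENS)
    print(f"{label:>12}: {tokens:>6} tokens, ~${cost:.2f}")
```

Even with a modest baseline, the multi-Agent case is an order of magnitude more expensive per task, which is the expansion dynamic the rest of the article examines.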
Paradigm shift in computing demand: from Prompt to long Agent
Nonlinear increase in computing power driven by long Agents
Several core factors drive long Agents' growing demand for computing power:

1) Technical mechanisms. First, the computational cost of a large model's self-attention mechanism grows quadratically with context length. Second, the decode stage of inference is inherently limited by memory bandwidth: as the KV cache expands linearly with context, GPU utilization steadily declines and throughput bottlenecks become more pronounced. The pricing structures of mainstream vendors reflect these physical costs: both Google Gemini 3.1 Pro and Alibaba Cloud Qwen adopt tiered pricing based on context length.

2) Multi-Agent collaboration architectures introduce additional communication overhead. Gartner data show that enterprise inquiries about multi-Agent systems surged 1445% from Q1 2024 to Q2 2025; meanwhile, Google DeepMind research notes that compressing and transferring global context among parallel Agents inevitably incurs a "coordination tax," with communication costs rising nonlinearly in the number of Agents.

3) Jevons' paradox further amplifies these effects: Microsoft CEO Satya Nadella predicts that gains in model inference efficiency, while lowering unit costs, will stimulate even faster growth in usage.
In summary, the increase in Agent runtime is an inevitable technological trend. In the foreseeable future, demands for memory bandwidth, interconnect throughput, and intelligent computing density will continue to expand at a nonlinear rate.
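The "coordination tax" described earlier can also be sketched numerically. Assuming, purely for illustration, that every pair of parallel Agents exchanges one compressed-context hand-off at a fixed token cost, coordination overhead grows with the square of the Agent count; the per-exchange cost below is an assumed value, not a measured figure.

```python
# Illustrative sketch of the "coordination tax": if each of n parallel Agents
# shares compressed context with every other Agent, the number of pairwise
# exchanges grows as n * (n - 1) / 2, i.e. nonlinearly in the Agent count.
# The per-exchange token cost is an assumed value for illustration only.

TOKENS_PER_EXCHANGE = 1_500   # assumed cost of one context hand-off

def coordination_tokens(n_agents: int) -> int:
    """Total coordination tokens under all-pairs context sharing."""
    exchanges = n_agents * (n_agents - 1) // 2
    return exchanges * TOKENS_PER_EXCHANGE

for n in (2, 4, 8, 16):
    print(f"{n:>2} agents: {coordination_tokens(n):>8} coordination tokens")
```

Going from 2 to 16 Agents multiplies the pairwise exchanges by 120, which is the kind of nonlinear communication growth the DeepMind observation points to.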