A Concern Triggered by a Lobster: The AI Agent Anxiety Behind OpenClaw's Viral Success
Written by: Deng Tong, Jinse Finance
Spring 2026 brought not only conflict in the Middle East and soaring oil prices, but also a lobster stirring up the AI sector.
On January 24, OpenClaw reached the top of the Hacker News homepage, marking its breakout moment. By January 30, the project had completed its branding, formally adopting the OpenClaw name and a red lobster logo. In February and March, major Chinese companies followed with one-click deployments, queues formed at offline demo events, and online discussion snowballed, turning OpenClaw into a viral sensation.
However, as the lobster gained popularity nationwide, concerns about the "naked" lobster emerged. Within days, public sentiment shifted from "raising lobsters" to "killing lobsters," and debates over the boundaries of AI intelligence and safety began…
OpenClaw is essentially an open-source AI agent framework, first released by Austrian developer Peter Steinberger. It allows users to create autonomous agents capable of calling large models, accessing file systems, connecting to external applications, and executing tasks.
Unlike traditional chatbots, OpenClaw is not just a simple “question-answering AI” but an AI assistant capable of performing actual operations.
Many tech companies are building application scenarios based on OpenClaw, with some cities even introducing subsidy policies to attract related startups:
Wuxi High-tech Zone issued "Measures for Supporting the Integration of OpenClaw and Other Open Source Community Projects with the OPC Community" (draft for comments), featuring 12 "lobster-raising" policies spanning basic support, industry implementation, talent cultivation, and safety compliance, with support of up to 5 million yuan per recipient. Local cloud platforms that provide free deployment and development toolkits are eligible for full subsidies of up to 1 million yuan.
Hefei High-tech Zone released the "Action Plan for Building an AI OPC Startup Ecosystem Demonstration Zone" (draft for comments), with 15 concrete measures to fully support the implementation of open-source AI projects like OpenClaw, aiming to establish a new model of "AI + super individuals/one-person companies (OPC)." Hefei offers a generous "space + talent + computing power + scenarios + capital" package, with funding support of up to 10 million yuan.
Nanjing's Qixia and Jiangning Districts issued special support policies for OPC × OpenClaw integration. Qixia District released "Qixia High-tech Zone's Measures to Support the Development of OpenClaw and Other Open Source AI Tools and OPC Integration," providing OPC-community developers with free computing resources and subsidies for API calls to leading domestic large models, and supporting hourly and daily elastic computing to reduce early-stage technical costs. The "Zijin Star" OPC community in Zijing Mountain Science and Technology City, Jiangning District, launched its "Six Lobster Policies," subsidizing up to 30% of AI compute contract amounts, with annual compute-voucher payouts capped at 2 million yuan per user.
For a time, “raising a lobster” even became a new trend in the AI circle.
But the good times didn’t last long. The hidden risks of “lobsters” drew industry attention.
Many users accidentally exposed the control interface directly to the public internet during deployment. OpenClaw serves its control interface on port 18789 by default; without strict authentication, anyone scanning the network can easily find it. Once such an interface is compromised, the attacker gains control of an AI agent with system-level permissions, and a tool that once provided convenience instantly becomes a springboard for taking over your computer or server.
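A quick way to see whether that default port is reachable is a plain TCP connection test. The sketch below is illustrative only; it assumes the article's port 18789 and checks reachability from wherever you run it (run it against your server's public IP to test internet exposure).

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # 18789 is OpenClaw's default control port per the article.
    # Substitute your server's public IP for a real exposure check.
    print("control port reachable:", port_open("127.0.0.1", 18789))
```

If this returns `True` from an outside host, the control interface is exposed; the usual fixes are binding it to loopback only or putting a firewall and authentication in front of it.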
On March 10, Zhou Hongyi further discussed the development and risks of AI in a personal account, bluntly stating: “Running OpenClaw also involves data security issues; sometimes it hallucinates and might delete all files on your C drive.” He also pointed out that most organizations’ self-developed large models are still at the “chatbot” stage, far from truly “working” intelligent agents, and enterprise AI deployment requires multidisciplinary talents who understand both technology and business.
On March 11, the National Vulnerability Database (NVDB) organized industry stakeholders—including AI providers, vulnerability collection platforms, and cybersecurity firms—to propose the "Six Do's and Six Don'ts." It highlighted that financial transaction scenarios pose significant risks of erroneous trades or account hijacking. Recommended countermeasures include network isolation, least-privilege access, emergency manual review and circuit breakers, secondary confirmation for critical operations, supply-chain audits, use of official components, regular vulnerability patching, and comprehensive security monitoring.
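One of those countermeasures, secondary confirmation for critical operations, can be sketched as a simple gate in front of an agent's action dispatcher. This is a hypothetical illustration, not OpenClaw's actual API; the action names and the `confirm` callback are assumptions.

```python
# Actions that must never run without a human sign-off (illustrative list).
CRITICAL_ACTIONS = {"send_funds", "delete_files", "send_email"}

def execute(action: str, payload: dict, confirm) -> str:
    """Run an agent action; critical ones require human confirmation.

    `confirm` is a callback that asks a human and returns True/False,
    e.g. lambda msg: input(msg + " [y/N] ").lower() == "y".
    """
    if action in CRITICAL_ACTIONS and not confirm(
        f"Agent wants to run '{action}' with {payload}. Allow?"
    ):
        return "blocked: human rejected the action"
    return f"executed: {action}"
```

The design point is that the model never gets to decide alone: the gate sits outside the agent's reasoning loop, so even a hallucinated "transfer funds" step stalls at the human checkpoint.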
A maintainer of OpenClaw, Shadow, warned on Discord: “If you don’t even know how to use the command line, this project is too dangerous for you and not safe to use.”
A quick online search reveals various “kill lobster” guides, with phrases like “uproot” and “lobster residue” appearing frequently. Under the shadow of security risks, the attitude has shifted rapidly from “raising a lobster” to “killing a lobster.”
Problems with OpenClaw
Excessive Permissions and Data Leakage
OpenClaw requires access to many sensitive resources, such as email, local files, API keys, and corporate data, and improper configuration carries significant risk. The agent might delete or modify files, call paid APIs, send emails automatically, or access internal enterprise data. Security researchers warn that malware known as "information stealers" can copy the OpenClaw files that hold credentials for email and other connected services; an attacker who obtains them could directly take control of the user's OpenClaw instance.
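The least-privilege principle applies directly here: rather than granting an agent blanket file access, confine it to an explicit allowlist of directories. A minimal sketch, assuming a hypothetical working directory (the path is an example, not an OpenClaw default):

```python
from pathlib import Path

# Example allowlist: the agent may only touch its own working directory.
ALLOWED_DIRS = [Path("/srv/agent/workdir").resolve()]

def is_allowed(path: str) -> bool:
    """True only if `path` resolves inside an allowed directory.

    Resolving first defeats `../` traversal tricks before checking
    whether an allowed directory is an ancestor of the target.
    """
    p = Path(path).resolve()
    return any(p == d or d in p.parents for d in ALLOWED_DIRS)
```

A file tool that calls `is_allowed()` before every read, write, or delete cannot be talked into touching `/etc` or a user's mail store, whatever the model hallucinates.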
Many users also connect OpenClaw to enterprise or personal data, increasing the risk of information leaks.
High Usage Costs
While OpenClaw itself is open-source and free, it does not ship with models; it connects to external providers such as OpenAI, Anthropic, and Google, and every inference costs money. Each conversation, automation, and decision triggers API calls to these models and incurs charges.
Monthly costs for OpenClaw range roughly from $6 to over $200, depending on deployment and usage intensity.
Specifically, total costs depend on VPS resources, large model types, automation frequency, and workflow scale. Most individual users spend $6–$13 per month, small teams $25–$50, medium or larger teams $50–$100, and highly automated setups processing thousands of interactions daily could exceed $100 monthly.
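The arithmetic behind those ranges is simple: token volume times per-token price, plus a flat hosting fee. The sketch below is a back-of-the-envelope estimator; the prices and usage numbers are illustrative placeholders, not real provider rates.

```python
def monthly_cost(calls_per_day: float,
                 tokens_per_call: float,
                 usd_per_1k_tokens: float,
                 vps_usd_per_month: float = 6.0) -> float:
    """Estimate monthly spend: API token cost plus a flat VPS fee."""
    api = calls_per_day * 30 * tokens_per_call / 1000 * usd_per_1k_tokens
    return round(api + vps_usd_per_month, 2)

# A light individual setup: 10 calls/day, 1k tokens each, $0.01/1k tokens.
print(monthly_cost(10, 1000, 0.01))  # → 9.0
```

At these assumed rates a casual user lands around $9/month, consistent with the article's $6–13 range, while scaling the call volume a hundredfold pushes the same formula well past $100.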
Shadow AI
Employees deploy OpenClaw on company devices with a single-line command, without approval or security review, and security operations centers (SOCs) often cannot monitor these shadow AI systems. This is the most dangerous form of shadow AI: 63% of organizations affected by AI-related security vulnerabilities have no AI governance policy, which means security teams often cannot oversee AI agents' behavior. A compromised or misconfigured agent can become a vulnerability inside the enterprise network.
Hallucinations and Runaway Loops
Unlike ordinary chatbots, AI agents execute tasks automatically. Studies show that agents are especially prone to "circular reasoning" on complex tasks; when they hallucinate, the result can be infinite loops, runaway API calls, erroneous operations, or exhausted server resources.
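The standard defense against such runaway loops is to cap both the number of reasoning steps and the spend before aborting. A minimal sketch, where the `step` callback stands in for one model call (a placeholder, not an OpenClaw API) and returns a done-flag and its cost:

```python
def run_agent(step, max_steps: int = 25, budget_usd: float = 1.0) -> dict:
    """Run an agent loop, aborting once either cap is exceeded.

    `step` performs one reasoning/tool step and returns (done, cost_usd).
    """
    spent, steps = 0.0, 0
    while steps < max_steps and spent < budget_usd:
        done, cost = step()
        steps += 1
        spent += cost
        if done:
            return {"status": "done", "steps": steps, "spent": spent}
    return {"status": "aborted", "steps": steps, "spent": spent}
```

An agent stuck in circular reasoning never returns `done`, so the loop terminates at `max_steps` instead of burning API credits indefinitely; the budget cap catches the complementary failure of few but expensive calls.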
OpenClaw is the first lobster to emerge from the AI wave. Despite many issues, this lobster symbolizes the future of artificial intelligence.
In recent years, the AI wave was first ignited by ChatGPT, with its focus on dialogue and content generation; then came the Copilot era, which assisted work and improved efficiency; now we are entering the AI agent era, defined by autonomous task execution. Under OpenClaw's model, AI completes complex tasks end to end: understanding the task, breaking down goals, calling tools, integrating information, and delivering the result. Users no longer operate software directly; they describe a goal, and the AI handles task decomposition and tool invocation. This introduces security risks, but it also marks a new wave of human-AI interaction and innovation.
Unlike traditional AI software, AI agents have greater autonomy in executing tasks. If systems are attacked, permissions misconfigured, or hallucinations occur, errors or security risks can result. This is why OpenClaw’s popularity has also sparked widespread discussions on safety and governance.
Security concerns are temporary; the development and refinement of AI agents are inevitable trends. Just as ChatGPT’s emergence led to numerous dialogue and content generation tools, OpenClaw will also give rise to many AI agents.
In the future, we may see an era where everyone “raises a lobster.” Lobsters are indeed delicious, but as we enjoy the AI feast, we must first learn how not to get pinched.