Is the AI business unprofitable? The dawn of DeAI is already here.
Written by: Zhang Feng
Artificial intelligence (AI) is undoubtedly the hottest technological trend in the world, reshaping industry after industry at unprecedented speed. Yet behind the boom lies a harsh reality: the vast majority of AI businesses, especially startups, have not found a stable, sustainable path to profitability. They are stuck winning applause but not revenue, with technological prosperity coexisting alongside commercial losses.
The profitability dilemma of the AI business does not stem from a failure of the technology itself, but from the structural contradictions created by its centralized development model. Specifically, there are three main reasons:
Extreme Centralization: Sky-High Costs and Oligopoly. Mainstream AI today, especially large models, is a typical "heavy-asset" industry: training and inference consume enormous amounts of computing power (GPUs), storage, and electricity. The result is polarization. On one end sit the capital-rich technology giants (such as Google, Microsoft, and OpenAI), able to absorb investments of hundreds of millions or even billions of dollars; on the other end are countless startups forced to hand the vast majority of their funding over to cloud providers for computing power, which severely squeezes their margins. This model creates a "computing power oligopoly" that stifles innovation. Even OpenAI, in its early years, relied heavily on Microsoft's massive investment and Azure cloud resources to support the research and operation of ChatGPT. For most players, high fixed costs make scalable profitability hard to achieve.
Data Dilemma: Quality Barriers and Privacy Risks. The fuel of AI is data, and centralized AI companies typically face two major challenges in obtaining high-quality, large-scale training data. First, acquisition is exorbitantly expensive: whether through paid collection, data labeling, or the use of user data, it demands huge investments of money and time. Second, privacy and compliance risks loom large: as global data regulations tighten (GDPR, CCPA), collecting and using data without explicit user authorization can bring lawsuits and hefty fines at any time, and many well-known tech companies have already been fined astronomical sums over data-use issues. The result is a paradox: without data, AI cannot develop, yet acquiring and using data is fraught with difficulty.
Value Distribution Imbalance: Contributors and Creators Are Shut Out of the Profits. In the current AI ecosystem, value is distributed extremely unfairly. Training AI models depends on behavioral data generated by countless users, on content produced by creators (text, images, code, and more), and on open-source code contributed by developers worldwide. Yet these core contributors receive almost none of the enormous commercial value the models create. This is not only an ethical problem but an unsustainable business model: it dampens the enthusiasm of data contributors and content creators and, over time, erodes the foundation for continuously improving AI models. A typical case: many artists and writers have accused AI companies of training on their works and profiting from them without any compensation, sparking widespread controversy and legal disputes.
DeAI (decentralized AI) is not a single technology but a new paradigm that combines blockchain, cryptography, and distributed computing. It aims to rebuild the production relations of AI in a decentralized way, addressing the three pain points above and opening a path to profitability.
DeAI distributes computing demand through a crowdsourcing model to idle nodes around the world (personal computers, data centers, and so on). Much like an "Airbnb for GPUs," this creates a global, competitive computing power market that can significantly lower compute costs. Participants earn token incentives for contributing computing power, so resources are allocated more efficiently.
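As a rough illustration only, the Python sketch below shows how such a market might clear: jobs are greedily matched to the cheapest idle providers, and each provider is credited with tokens for the hours it actually supplied. The provider names, prices, and matching rule are illustrative assumptions, not the mechanism of any specific DePIN protocol.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    gpu_hours: float       # idle capacity offered to the network
    price_per_hour: float  # asking price in tokens

def match_jobs(jobs, providers):
    """Greedily assign each job's GPU-hours to the cheapest remaining capacity
    and credit providers with tokens for the hours they actually supply."""
    payouts = {p.name: 0.0 for p in providers}
    for hours_needed in jobs:
        for p in sorted(providers, key=lambda p: p.price_per_hour):
            if hours_needed <= 0:
                break
            take = min(hours_needed, p.gpu_hours)
            p.gpu_hours -= take
            payouts[p.name] += take * p.price_per_hour
            hours_needed -= take
    return payouts

# Hypothetical providers and jobs (GPU-hours requested per training job)
providers = [Provider("home_rig", 10, 0.5), Provider("small_dc", 100, 0.8)]
print(match_jobs(jobs=[24, 40], providers=providers))
# -> {'home_rig': 5.0, 'small_dc': 43.2}: token payouts for compute actually supplied
```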
DeAI achieves "the data does not move, the model does" through techniques such as federated learning and homomorphic encryption. Raw data never needs to be centralized; instead, the model is sent to each data source for local training, and only encrypted parameter updates are aggregated. This protects data privacy at the root while exploiting the value of decentralized data in a legal, compliant way, and data owners can decide for themselves whether to contribute data and profit from it.
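To make "the data does not move, the model does" concrete, here is a minimal federated-averaging sketch in plain Python with NumPy. Encryption of the updates is deliberately omitted, and the data, model, and learning rate are toy assumptions; the snippet only demonstrates the aggregation pattern.

```python
import numpy as np

def local_update(global_weights, local_data, lr=0.1):
    """Each data owner trains locally (here: one gradient step of linear
    regression) and returns only a weight update, never the raw data."""
    X, y = local_data
    grad = X.T @ (X @ global_weights - y) / len(y)
    return global_weights - lr * grad

def federated_round(global_weights, datasets):
    """Aggregate the (in practice, encrypted) local updates by averaging."""
    updates = [local_update(global_weights, d) for d in datasets]
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
datasets = []
for _ in range(3):  # three independent data owners, data kept local
    X = rng.normal(size=(50, 2))
    datasets.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

w = np.zeros(2)
for _ in range(200):
    w = federated_round(w, datasets)
print(w)  # approaches the true coefficients without pooling any raw data
```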
DeAI builds a transparent and fair value distribution system through token economics and smart contracts. Data contributors, computing power providers, model developers, and even model users automatically receive token rewards in proportion to their contributions via smart contracts. This turns AI from a "black box" controlled by giants into an open economy that the community builds, governs, and shares together.
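A minimal sketch of what such automatic distribution might look like, written in Python rather than an actual contract language: a settlement period's revenue pool is split pro rata across recorded contribution weights. The roles and weights below are purely illustrative assumptions.

```python
def distribute_rewards(revenue_pool, contributions):
    """Split a token revenue pool pro rata by recorded contribution weight,
    mimicking what a settlement smart contract might do on-chain."""
    total = sum(contributions.values())
    return {who: revenue_pool * weight / total
            for who, weight in contributions.items()}

# Illustrative contribution weights for one settlement period
contributions = {
    "data_contributors": 40,
    "compute_providers": 35,
    "model_developers": 20,
    "governance_treasury": 5,
}
print(distribute_rewards(revenue_pool=10_000, contributions=contributions))
# e.g. data contributors receive 4,000 tokens of a 10,000-token pool
```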
Migrating traditional centralized AI businesses to the DeAI paradigm requires systematic restructuring at the technical, business, and governance levels.
(1) Technical reconstruction from centralized to distributed
The computing power layer relies on decentralized physical infrastructure network (DePIN) projects such as Akash Network and Render Network to build flexible, low-cost distributed computing pools that replace traditional centralized cloud services.
The data layer uses federated learning as the core training framework, combined with cryptographic techniques such as homomorphic encryption and secure multi-party computation to keep data private and secure. A blockchain-based data marketplace such as Ocean Protocol then allows data to be traded once ownership is confirmed and security is guaranteed.
The model layer deploys the trained AI model on the blockchain as an "AI smart contract," making it transparent, verifiable, and permissionlessly callable. Every use of the model, and the revenue it generates, can be accurately recorded and distributed.
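As a hedged sketch of the "every call is recorded and settled" idea, the snippet below keeps an append-only ledger of model calls and sums fees per model. A real deployment would write these entries to contract storage or emit them as on-chain events; the model name and caller addresses here are hypothetical.

```python
import time

class UsageLedger:
    """Append-only record of model calls, mimicking on-chain event logging."""
    def __init__(self):
        self.entries = []

    def record_call(self, model_id, caller, fee_tokens):
        # Each inference call is logged with who called, what they paid, and when.
        self.entries.append({
            "model": model_id,
            "caller": caller,
            "fee": fee_tokens,
            "timestamp": time.time(),
        })

    def revenue_by_model(self):
        # Aggregate fees so revenue can later be distributed to stakeholders.
        totals = {}
        for e in self.entries:
            totals[e["model"]] = totals.get(e["model"], 0) + e["fee"]
        return totals

ledger = UsageLedger()
ledger.record_call("eta-predictor-v1", "0xabc...", fee_tokens=2)
ledger.record_call("eta-predictor-v1", "0xdef...", fee_tokens=2)
print(ledger.revenue_by_model())  # {'eta-predictor-v1': 4}
```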
(2) Business reconstruction from selling services to ecosystem co-building
Moving from SaaS to DaaS (Data as a Service) and MaaS (Model as a Service), companies no longer simply sell API calls; they become ecosystem builders. By issuing utility or governance tokens, they incentivize the community to help build the network. Revenue expands from a single service fee to include token appreciation driven by ecosystem growth, trading-fee dividends, and more.
Companies can also build decentralized task platforms where work such as data annotation, model fine-tuning, and application development is published as "bounties" for community members around the world to take on and be rewarded for. This greatly reduces operating costs and stimulates innovation.
(3) Governance reconstruction from company to DAO
Under community governance, participants (contributors and users) who hold governance tokens can vote on key decisions such as the direction of model adjustments, the use of treasury funds, and the priority of new features, achieving genuine "users as owners."
Under openness and transparency, all code, models (at least partially open-source), transaction records, and governance decisions are recorded on-chain so the process stays open and auditable, establishing trustless collaboration. That in itself is a powerful brand asset and a source of trust.
Take the transformation of a traditional logistics data platform to DeAI as an example. The dilemma of such platforms is that although they aggregate data from maritime transport, land transport, warehousing, and other sources, participants are reluctant to share for fear of leaking commercial secrets, so data silos form and the platform's value stays limited. The core of a DeAI transformation is to unlock the value of that data, and reward contributors fairly, without exposing the raw data.
Technically, build a trusted computing network. The platform no longer stores data centrally but becomes a blockchain-based coordination layer. Using federated learning, AI models are dispatched to each enterprise's local servers (shipping companies, warehouses, and so on) for training, and only encrypted parameter updates are aggregated to jointly optimize global prediction models (such as vessel arrival times and warehouse congestion risk), achieving "the data stays put, the value moves."
Commercially, promote data assetization and token incentives. The platform issues a utility token, and logistics companies "mine" it by contributing data in the form of model parameter updates. Downstream customers (such as cargo owners) pay tokens to query high-precision forecasts (for example, the on-time rate of a specific route over the coming week) rather than buying raw data, and revenue is automatically distributed to data contributors through smart contracts.
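One plausible, purely hypothetical way to split a forecast-query fee among contributing companies is in proportion to how much each company's parameter update reduced the shared model's validation error, as in the sketch below. The company names, metric, and weighting are illustrative assumptions, not a prescribed design.

```python
def split_query_fee(fee_tokens, error_reduction):
    """Split one forecast-query fee among contributors in proportion to the
    validation-error reduction attributed to each company's model update."""
    total = sum(error_reduction.values())
    if total <= 0:
        return {company: 0.0 for company in error_reduction}
    return {company: fee_tokens * r / total
            for company, r in error_reduction.items()}

# Hypothetical attribution for one settlement window
error_reduction = {"shipping_co": 0.06, "trucking_co": 0.03, "warehouse_op": 0.01}
print(split_query_fee(fee_tokens=50, error_reduction=error_reduction))
# shipping_co earns the largest share because its data improved the forecast most
```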
In governance, build an industry DAO in which key decisions (such as new feature development and fee adjustments) are voted on by token holders, who are also the core participants, turning the platform from something led by a private company into an industry community.
The platform thus changes from a centralized entity trying to extract data-intermediary fees into a shared nervous system for the entire logistics chain, co-built, co-governed, and co-owned by its participants, and by resolving the trust problem it significantly improves industry-wide collaboration and resilience.
Despite the broad prospects of DeAI, its development is still in the early stages and faces a series of significant challenges.
Compliance and legal uncertainty. On data regulation, even if the data does not move, approaches such as federated learning must still strictly satisfy requirements on "purpose limitation," "data minimization," and user rights (such as the right to be forgotten) under regulations like GDPR whenever personal data is processed. Project teams must design compliant mechanisms for data authorization and withdrawal.
On securities regulation, a project's tokens can easily be classified as securities by regulators in various jurisdictions (such as the US SEC), triggering strict scrutiny. Designing the token economic model to avoid such legal risk is crucial to a project's survival.
In terms of content responsibility, if a DeAI model deployed on the chain produces harmful, biased, or illegal content, who is the responsible party? Is it the model developer, the computing power provider, or the governance token holders? This brings new challenges to the existing legal system.
On security and performance, model security is one concern: models deployed on public chains face new attack vectors, such as exploitation of smart-contract vulnerabilities or poisoning attacks that feed malicious data into federated learning.
Performance is another bottleneck: the blockchain's own transaction throughput (TPS) and storage limits may not support high-frequency, low-latency inference for large models, which calls for an effective combination of Layer 2 scaling and off-chain computation.
Finally, there is collaboration efficiency: distributed cooperation may be fairer, but its decision-making and execution can be slower than a centralized company's. Balancing efficiency and fairness is an art that DAO governance will have to keep exploring.
DeAI, as a revolution in production relations, is expected to break the monopoly of giants and release idle computing power and data value globally through distributed technology, token economy, and community governance, thereby building a fairer, more sustainable, and potentially more profitable new AI ecosystem.
The current development of AI tools is still quite a long way from achieving the ideal of decentralized artificial intelligence. We are still in the early stage dominated by centralized services, but some explorations have already pointed to the direction of the future.
Current exploration and future challenges. Although the ideal DeAI has not yet been realized, the industry is already making valuable attempts, which help us see the future path and the obstacles that need to be overcome.
Prototypes of multi-agent collaboration. Some projects are exploring environments in which AI agents cooperate and evolve together. For example, the AMMO project aims to create a "symbiotic network between humans and AI"; its multi-agent framework and RL Gyms simulation environment let agents learn cooperation and competition in complex scenarios. This can be read as an attempt to establish the underlying interaction rules of a DeAI world.
Early attempts at incentive models. In the DeAI vision, users who contribute data and nodes that provide computing power should be rewarded fairly. Some projects are already trying to redistribute value directly to ecosystem contributors through crypto-based incentive systems, although how such an economic model can run at scale, stably and fairly, remains a major challenge.
Movement toward more autonomous AI. Deep Research-style products show how much autonomy AI can have on specific tasks (such as information retrieval and analysis): they plan on their own, execute multi-step operations, and iteratively refine their results. This capacity for task automation is the foundation on which AI agents will work independently in a future DeAI network.
For AI practitioners struggling in today's red ocean, rather than staying stuck in the old paradigm, it is better to embrace the blue ocean of DeAI. This is not just a change of technical direction but a reshaping of business philosophy: from "extraction" to "incentivization," from "closed" to "open," and from "monopolized profits" to "inclusive growth."