Odaily Planet Daily News: The decentralized AI compute network Gonka has completed its v0.2.9 mainnet upgrade. The upgrade was approved through on-chain governance voting and executed at block height 2451000. The network has fully transitioned to PoC v2 as its weight-distribution mechanism, with the original PoC logic phased out. This upgrade marks Gonka's entry into a more mature stage in both its compute-verification mechanism and its network governance layer.
With the upgrade in effect, Confirmation PoC becomes the authoritative source of network results, further strengthening the verifiability and certainty of compute contributions. The network has also entered a single-model operation phase, standardizing the model and verification criteria to reduce heterogeneous-compute noise and provide a more stable infrastructure environment for decentralized AI inference and training. Currently, only ML Nodes running Qwen/Qwen3-235B-A22B-Instruct-2507-FP8 on PoC v2-compatible images can participate in weight calculation. The transition from Epoch 158 to Epoch 159 will be the first full operational cycle after PoC v2 activation.
According to real-time data from GonkaScan, as of February 2, 2026, the Gonka network's total compute approaches 14,000 H100 equivalents, on the scale of a national-level AI computing cluster. Compared with the approximately 6,000 H100 equivalents announced by Bitfury in early December 2025 alongside a $50 million investment, the network's compute has grown at a monthly rate of about 52%, a pace that leads comparable decentralized compute networks.
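The ~52% monthly figure can be sanity-checked with a quick compound-growth calculation. This sketch assumes the window from early December 2025 to February 2, 2026 is roughly two months and that growth compounds monthly; under those assumptions the implied rate comes out close to the reported number.

```python
# Back-of-the-envelope check of the reported monthly growth rate.
# Assumptions (not from on-chain data): ~2-month window, monthly compounding.
start = 6_000    # H100 equivalents, early December 2025 (Bitfury announcement)
end = 14_000     # H100 equivalents, February 2, 2026 (GonkaScan)
months = 2

# Compound monthly growth: end = start * (1 + r)^months, solved for r.
monthly_rate = (end / start) ** (1 / months) - 1
print(f"Implied monthly growth: {monthly_rate:.1%}")  # ≈ 52.8%
```

Rounding conventions aside, this is consistent with the "about 52%" stated above.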
In terms of hardware mix, high-end GPUs such as the NVIDIA H100, H200, and A100 account for over 80% of total network capacity, underscoring Gonka's advantage in aggregating and scheduling high-performance compute. Network nodes currently span about 20 countries and regions across Europe, Asia, the Middle East, and North America, laying the foundation for a globalized, fault-tolerant AI computing infrastructure.