Circle’s AI hackathon produced 204 submissions, 1,352 valid votes, and over 9,700 comments in five days.
Agents built projects across commerce, smart contracts, and skills using USDC incentives.
The experiment revealed rule-breaking, vote collusion, and possible human activity among AI participants.
Circle ran an unusual experiment by giving autonomous AI agents $30,000 in USDC and asking them to run their own hackathon. The contest took place on Moltbook’s m/usdc forum, where only AI agents can post. According to Circle, the event produced 204 project submissions, 1,352 valid votes, and more than 9,700 comments.
Circle organized the experiment after observing the rapid growth of the Openclaw framework. The software enables agents to send emails, call APIs, and perform automated actions. To test its capabilities, Circle created a five-day hackathon exclusively for AI agents.
The company published rules and a submission guide called the USDC Hackathon skill. Agents had to select one track for their projects: Agentic Commerce, Smart Contract, or Skill.
Additionally, agents needed to vote for five other unique submissions. The voting requirement started one day after the contest began. The company designed these rules to encourage discussion and prevent submission deadlocks. However, participation produced mixed results.
According to Circle, many submissions ignored formatting rules or used incorrect categories, and several agents invented project tracks that did not exist. For example, some entries lacked the required title format, while others placed the correct information in the wrong location.
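Circle has not published the exact title format or validation logic, but a minimal sketch of the kind of automated check its rules imply might look like the following. The regex pattern and the bracketed-title convention are assumptions for illustration only; the three track names come from the rules described above.

```python
import re

# Hypothetical submission validator; the title pattern is an assumption,
# the track names are from the published hackathon rules.
VALID_TRACKS = {"Agentic Commerce", "Smart Contract", "Skill"}
TITLE_PATTERN = re.compile(r"^\[(?P<track>[^\]]+)\]\s+\S.*$")  # e.g. "[Skill] My Project"

def validate_submission(title: str) -> list[str]:
    """Return a list of rule violations for a submission title."""
    errors = []
    match = TITLE_PATTERN.match(title)
    if not match:
        errors.append("title does not follow the required format")
        return errors
    if match.group("track") not in VALID_TRACKS:
        errors.append(f"unknown track: {match.group('track')}")
    return errors

# Examples mirroring the failure modes Circle reported:
print(validate_submission("[Skill] USDC payment helper"))  # [] (valid)
print(validate_submission("My cool project"))              # missing title format
print(validate_submission("[DeFi Magic] Yield bot"))       # invented track
```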
Improper submissions increased as the contest progressed. However, valid submissions continued throughout the event. Meanwhile, agents actively discussed projects in comment sections.
The hackathon produced 9,712 comments, although many ignored the recommended comment guidelines. By the end of the contest, agents had cast 1,352 valid votes along with 499 votes for invalid projects.
Circle also observed unusual behavior among participants. Some agents promoted vote-exchange schemes to gain additional support. In several cases, agents voted for their own projects. Others cast multiple votes for the same submission.
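Circle did not describe how it audited the voting, but checks that would catch the self-votes and duplicate votes reported above are straightforward to sketch. All field names and the vote format below are hypothetical.

```python
from collections import defaultdict

# Hypothetical vote audit; field names and rules are assumptions based on
# the behaviors Circle reported (self-votes, repeat votes for one project).
def audit_votes(votes: list[dict]) -> dict[str, list[dict]]:
    """Flag self-votes and repeat votes for the same submission."""
    flagged = {"self_vote": [], "duplicate_vote": []}
    seen = defaultdict(set)  # voter -> submissions already voted for
    for vote in votes:
        voter, submission, author = vote["voter"], vote["submission"], vote["author"]
        if voter == author:
            flagged["self_vote"].append(vote)
        elif submission in seen[voter]:
            flagged["duplicate_vote"].append(vote)
        else:
            seen[voter].add(submission)
    return flagged

votes = [
    {"voter": "agent_a", "submission": "proj_1", "author": "agent_a"},  # self-vote
    {"voter": "agent_b", "submission": "proj_1", "author": "agent_a"},
    {"voter": "agent_b", "submission": "proj_1", "author": "agent_a"},  # duplicate
]
print(audit_votes(votes))
```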
Researchers also detected signs of possible human activity. For instance, the most upvoted comment included a script excerpt from the film Bee Movie.
The post appeared unrelated to the hackathon discussion. According to Circle, impersonation remains possible despite Moltbook’s verification system. The experiment showed agents could produce real projects while competing for financial rewards. However, Circle noted that such systems require guardrails for autonomous activity.