a16z: After AI grants humans superpowers, where should we go from here?

Title: AI Just Gave You Superpowers—Now What?

Author: a16z crypto


Reprinted from: Mars Finance

A new paper titled “The Minimal Economics of AGI” is widely circulating. To explore this, we spoke with the authors, covering:

· Automation and Verification: The Core of the Economy

· Why AI Agents Feel Like Junior Colleagues Today, What Is Changing, and the “Cursed Coder”

· “Meaning Creators,” Consensus, and the Value of Status Economies

· Why Cryptocurrency Might Become the Key Infrastructure for Identity, Provenance, and Trust

· Two Possible Futures: Hollowed-Out Economy vs. Augmented Economy

This episode features Christian Catalini, founder of the MIT Cryptoeconomics Lab, and Eddy Lazzarin, CTO of a16z crypto, in conversation with Robert Hackett, delving into how automation is reshaping the labor market and the nature of intelligence.

What do these changes mean for startups, future work, and your career?

Below is the dialogue:

Robert Hackett: Hello everyone. Today we have Christian Catalini, co-founder of Lightspark and founder of the MIT Cryptoeconomics Lab, along with Eddy Lazzarin, CTO of a16z crypto.

We’re here to discuss Christian’s latest paper, “The Minimal Economics of AGI.”

First, I want to ask: what prompted you to start researching the economic relationship between AI and the real world?

Christian Catalini: I’d say it stems from a semi-existential crisis. We’re all facing rapid technological progress and how quickly everything is changing.

I’m an optimist, but the core question remains: what should we do? What should we focus on? What’s worth our time, effort, and attention?

A few months ago, we wrote an article about measurement, with the core idea that anything measurable will eventually be automated. That doesn’t sound like good news. The second paper’s core is: if that hypothesis holds, and we push it to the extreme, what happens?

What will the economy look like? What will be the nature of labor? What should startups do? What about existing giants? Ultimately, what will the future be like?

Some judgments will be correct, some will be wrong. Hopefully, our direction is right. Now that the paper is public, we’re seeing which ideas resonate and which don’t.

Robert: You said this stems from a semi-existential crisis?

Christian: I have three main insights. First, this technology is still under our control. Second, its positive value is several orders of magnitude greater than pessimists claim. Third, I believe we all have a set of action guidelines.

We can think about: where do we create value? What kind of work do we do? Work is often a combination of tasks. When some tasks or parts of work are automated, people become very anxious.

I believe programming is going through this process now: many talented people who have written elegant, excellent code over the past decades are now realizing, “Wow, AI is doing the work I used to do.”

AI Agents: From Tools to Colleagues

Robert: I want to go deeper. Today, we also have Eddy Lazzarin, who has been CTO at a16z crypto for several years. Eddy, how do you see these changes?

Eddy Lazzarin: I’ll start by putting the timeline and paper background together. Many people feel that around December 2025, a qualitative shift occurred. The change is that a series of incremental improvements in agent capabilities reached a critical point: AI agents can now perform long-term tasks.

A year ago, it still felt like: I ask an agent to do a small task, and it does very well, but I need to give it the next instruction step by step.

Now, you can give it less guidance. Maybe it’s not perfect, but suddenly, it’s like working with a person.

You don’t need to break down tasks into tiny steps and micromanage every detail—that’s extreme micro-management. Now, just clarify your intent, and it goes off to do it, returning with results in a day or two. This qualitative change sparks huge imagination, and everyone is starting to face this reality.

Facing that reality involves emotional reactions, but the more interesting question is how to maximize value in real production and business scenarios.

People are gradually discovering: AI can produce enormous amounts of work, some results are outstanding, and the time required is just a fraction of what it used to be. But it often reveals subtle flaws that weren’t fully appreciated before.

For example, software engineering is being redefined. Previously, people thought software engineering was sitting down to write a bunch of code: understanding problems, requirements, then coding, with code as the output.

But the reality is, AI helps us better decompose and understand this process. It’s a very fine, iterative process of correction, feedback collection, and integration—not just line-by-line coding. It’s a holistic task. Therefore, the work focus of top engineers is shifting rapidly.

That process of experimentation, guidance, and risk-taking is what Christian calls verification in the paper.

The change is that the work structure for top engineers is evolving. The effort spent on line-by-line coding is becoming negligible, almost zero in some extreme “Vibe Coding” scenarios. Now, most of the work is verification.

Automation vs. Verification: The Core of Economics

Christian: The automation part is straightforward. Agents inherently can do more of what humans used to do. But currently, they are still somewhat limited to observable domains. All the codebases they learn from during training or fine-tuning form their foundation.

Many say, “They can’t innovate, lack creativity, lack taste.”

I completely disagree. In fact, innovation is largely a recombination of ideas. Humans have only explored a tiny subset of possible combinations across disciplines. So I believe, just by leveraging the knowledge we give them, these agents can be highly innovative.

In the new economy, verification is a significant cost. What is verification cost? It starts with the concept of measurement. If you agree that AI is very good at replicating processes given data, then you begin to ask: what is still unmeasurable today?

Some things are unmeasurable because they are inherently impossible to measure. Economists call this Knightian Uncertainty, named after economist Frank Knight.

Simply put, it’s the difference between assigning probabilities to future events and being completely unable to do so.

Robert: For those without an economics background, they might be more familiar with Donald Rumsfeld’s “unknown unknowns.”

Christian: Yes.

The “unknown unknowns” are essentially unmeasurable parts, usually related to the future. That’s why, even if you throw agents into the stock market, their average performance might be good—possibly better than your financial advisor—but they are unlikely to handle drastic environmental changes, like geopolitical shifts. These are unmeasurable factors. There are many such examples.

So, in the paper, verification essentially means applying, as a human, all the implicit standards of measurement you have accumulated from birth through your career.

Two people may have very similar knowledge and experience, but their judgments will never be exactly the same. When people say, “This person has taste,” “is an excellent curator,” “has strong judgment,”… one inspiration from this paper is: everyone is just making excuses to comfort themselves, like “machines will never do X, Y, Z.”

But these excuses are vague. How do you define taste? How do you define good judgment? Worse, a top engineer’s judgment from three months ago might be much more developed than it is now.

So, we need to find more fundamental, definable things. Our conclusion: as long as there is data that can be used for automation, it will be automated.

Three Human Roles in the Future Economy

Robert: Recently, you divided tasks and roles in the economy into three categories based on how automatable they are, that is, on how measurable their outputs and behaviors are.

Christian: I believe humans still have a lot of irreplaceable space across many dimensions. First, of course, is verification.

Now, the leverage of any individual in their profession is much greater than before December 2025. This means we should all be more ambitious, rethinking existing workflows—what we call the AI sandwich.

A company or startup might have only one human, whom we call the commander, responsible for guiding verification and ensuring the system can be corrected if it deviates from expectations. The top layer might be just one person or a small team.

The middle layer will have a large number of agents. We’ve already seen people trying all sorts of novel approaches.

At the bottom, there will be a group of top verifiers. With the right tools, top experts in each field will be responsible for ensuring the outputs meet expectations. This is extremely important work. For a long time, domain experts will shine in this role.

But here’s the bad news: when you do this work, you’re also creating labeled data that could replace you. We’ve seen the simplest version before: people labeling images for AI training, but now, those jobs are disappearing.

Now, large foundational model labs are hiring top experts from fields like finance. These experts are creating evaluation standards and training data, which will ultimately replace their peers. So, verification layers are very important, and many will succeed here, rewarded for their specialization. If you’re the one providing the final unlocking capability, your leverage is huge.

Robert: That’s the first role. And you call this the “Cursed Coder.”

Christian: The “Cursed Coder” is a mechanism where, if you’re a top verifier, you must keep upgrading because technology keeps advancing.

The person I called the commander is essentially the one driving intent. Entrepreneurs are the commanders—they see the future and imagine a path to realize it.

There’s also a category of work that we must admit is easily automatable. These roles have already disappeared or will soon. Society hasn’t fully addressed these impacts yet, but there will be huge retraining needs, pushing people into more advanced knowledge domains.

Some misunderstand the paper, thinking we say human verification is the last step, but often, AI verifies AI. Before reaching humans, there’s a long chain of verification.

And then, the most difficult role to define: “Meaning Creators.” These are people skilled at understanding trends, social changes, societal concerns—things that require collective consensus. Art is one example; blockchain networks are another.

These meaning creators sit outside the measurable domain. Sometimes people say these jobs require a “human touch.” But I believe people overestimate the importance of that human touch, for example in psychological counseling, elder care, and childcare.

I think people initially worry about these roles, but no one truly considers the huge cost reductions. If these become 100 or 1,000 times cheaper, attitudes will shift rapidly. In fact, we already see people using large models to answer highly personal, private questions.

Another category is “human-made” work, which will become a very important label. Cryptocurrency will play a key role here because, without strong cryptographic technology, we will quickly lose the essence of identity. But “human-made” is valuable only because human time and attention are scarce.

Not because it’s better, but because you know a human invested scarce time and attention to create that experience. These things still matter.

The Role of Cryptocurrency in the AI World: Identity, Provenance, Trust

Robert: You mentioned cryptography. What is the position of cryptocurrency in this world?

Christian: Very important.

When we started researching, many already pointed out that large models and AI are probabilistic, while cryptocurrencies are deterministic. You can imagine setting boundaries for agents with smart contracts, or giving agents the ability to buy and sell resources.

All these logics are valid. But I think there is a deeper complementarity between AI and crypto. It may not be obvious in today’s economy yet, but it will surface through side effects such as identity and digital provenance problems.

I believe that in the coming months, as these capabilities truly become powerful, we’ll enter completely unknown territory. Every digital platform will have to face a new reality: content that used to come from humans (posts, images, anything) can now come from agents.

As this trend develops, society will have to overhaul its identity systems. In an environment where trust is increasingly scarce, cryptographic primitives will shine in many applications. Everything built over the past decade will become more fundamental. Returning to verification: when the underlying information is on the blockchain, verification costs are lower, more reliable, and more trustworthy.
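
Christian’s earlier point about setting boundaries for agents with smart contracts can be made concrete with a small sketch. The Python below stands in for on-chain logic and is only illustrative; the guard, caps, and vendor names are assumptions rather than anything specified in the paper. The idea is that a deterministic rule set surrounds a probabilistic agent: the agent proposes actions, the boundary decides.

```python
from dataclasses import dataclass

@dataclass
class SpendingGuard:
    """Deterministic boundary around a probabilistic agent's purchases.

    In practice this logic would live in a smart contract so the limits are
    enforced on-chain rather than by the agent itself; the caps and vendor
    names below are illustrative assumptions.
    """
    daily_cap: float      # maximum total spend per day
    per_tx_cap: float     # maximum size of any single purchase
    allowed_vendors: set  # whitelist of counterparties
    spent_today: float = 0.0

    def authorize(self, vendor: str, amount: float) -> bool:
        """Approve the purchase only if it stays inside every boundary."""
        if vendor not in self.allowed_vendors:
            return False
        if amount > self.per_tx_cap:
            return False
        if self.spent_today + amount > self.daily_cap:
            return False
        self.spent_today += amount
        return True

# The agent proposes purchases; the guard decides deterministically.
guard = SpendingGuard(daily_cap=500.0, per_tx_cap=100.0,
                      allowed_vendors={"compute-provider", "data-vendor"})
print(guard.authorize("compute-provider", 80.0))   # True
print(guard.authorize("compute-provider", 450.0))  # False: exceeds the per-transaction cap
print(guard.authorize("unknown-market", 10.0))     # False: vendor not whitelisted
```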

Eddy: The cost of automation is rapidly decreasing. The broad verification costs we discussed are also falling, but not as fast, creating an interesting gap.

You can describe this gap in many ways; some call it an opportunity. That’s Christian’s assessment of human labor: where there’s a bottleneck, an unmeasurable gap that human adaptability, experience, and versatility can cover, humans remain better positioned than machines to do the verification.

In the short term, machines do face challenges in verification. Long-term, I don’t think it’s permanent, but in the short term, definitely.

Cryptography and blockchain are verification tools. Provenance proofs are cryptographic evidence that demonstrate certain things have been done by certain people, along certain paths, or through specific transformations, providing signals that make cross-category verification easier. Anything that simplifies verification will help fill this gap.
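
As a rough illustration of the provenance proofs Eddy describes, the sketch below commits each artifact to its content, its author, and the previous step in the chain, and attaches an authentication tag. It uses an HMAC from the Python standard library as a stand-in for real public-key signatures, and the record fields are assumptions made for illustration, not an actual protocol.

```python
import hashlib
import hmac
import json

def make_record(content: bytes, author: str, parent_hash: str, key: bytes) -> dict:
    """Create a provenance record linking an artifact to its author and its parent step."""
    record = {
        "content_hash": hashlib.sha256(content).hexdigest(),
        "author": author,
        "parent": parent_hash,  # hash/tag of the previous record ("" for the first step)
    }
    payload = json.dumps(record, sort_keys=True).encode()
    # HMAC stands in for a real digital signature by the author's key.
    record["tag"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return record

def verify_record(record: dict, content: bytes, key: bytes) -> bool:
    """Check that the content matches the record and that the tag is authentic."""
    body = {k: v for k, v in record.items() if k != "tag"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record["tag"])
            and record["content_hash"] == hashlib.sha256(content).hexdigest())

key = b"author-secret"  # stands in for a private signing key
draft = b"post written by a human"
rec = make_record(draft, author="alice", parent_hash="", key=key)
print(verify_record(rec, draft, key))                # True
print(verify_record(rec, b"tampered content", key))  # False
```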

The Hidden Cost of Automation: Systemic Risks and Responsibility

Eddy: Can we talk about the “Trojan Horse” problem? We’ve discussed risks to workers, but from an economic productivity perspective, what risks does low-cost automation pose to the economy?

Christian: We’re already seeing signs—many companies say that now X% of their code is machine-generated.

Product release cycles are shortening. But we also know that humans can’t review all code, and it’s likely to carry technical debt.

We’ve all felt the temptation: ask a large model a question, glance at the answer, and publish it as your own result without full verification, because models are getting better. But errors—be they faulty sentences, buggy code, or vulnerabilities that end up in the codebase—will become more common.

The paper’s view is that releasing AI-generated code, copy, or any output with potential errors is a fully rational choice because complete verification is impossible. If scaled to society, this could mean accumulating systemic risks.
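
A back-of-the-envelope calculation shows why shipping unverified output can look individually rational while still accumulating systemic risk across many releases. The numbers below are invented for illustration; the paper does not specify them.

```python
# All values are made-up assumptions for illustration only.
verification_cost = 500.0   # cost of fully reviewing one AI-generated release
error_probability = 0.02    # chance a release ships with a serious flaw
error_cost = 10_000.0       # expected damage if that flaw materializes

expected_error_cost = error_probability * error_cost
print(expected_error_cost < verification_cost)  # True: skipping review "wins" per release

# But the risk compounds across many unverified releases.
releases = 200
p_at_least_one_failure = 1 - (1 - error_probability) ** releases
print(round(p_at_least_one_failure, 3))  # ~0.982: a serious failure becomes nearly certain
```

Per release the expected cost of an error is lower than the cost of full verification, so skipping review wins locally; across a couple hundred such releases, though, at least one serious failure becomes almost inevitable.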

As development accelerates, we hope to develop better verification tools to review what we’ve already released. But in the medium to long term, companies face a dilemma: investing in more robust verification tools (including cryptographic primitives) is costly and might slow growth. The benefits are future-oriented, but companies are eager to release products and grow now.

So, I believe we’ll see two types of founders: one racing to release and grow now, and one focused on long-term responsibility, building responsibly. We already see signs of the latter, in what we might call “liability as software.” When deploying agents as employees, responsibility and insurance become increasingly important. It’s not glamorous, but systemic failures will happen in reality.

Eddy: That’s a very interesting idea. Previously, software production was mainly done by humans, so you could assume many steps had oversight and quality checks. Not that errors never happened, but someone was always involved at each stage.

But as automation increases, risks grow, and so does the importance of responsibility. The rewards also grow rapidly, making us more tolerant of those risks. But our ability to supervise, limit, and understand risk boundaries must also expand.

Introducing mechanisms like insurance to assign value to failure risks could become a key part of managing enterprises that can’t be fully supervised. You want to delegate quantifying risks and understanding issues to experts.

I find it fascinating that even software development might now involve entirely new financial dimensions.

Christian: Returning to cryptocurrencies, the past decade’s infrastructure has advanced our ability to measure and weight risks. You can draw from DeFi, prediction markets—these primitives suddenly become crucial.

If you deploy software or agents, having a tech stack that provides better signals is vital. For example, I spoke with a founder working on agent trading and payments. He found that switching from traditional payment systems to stablecoins made the system more reliable because all signals are on-chain. Agents can better understand what’s happening, not just call an API without feedback—they see the full context.

Another interesting point relates to insurance and responsibility. Some say network effects will be a sustainable moat in the AI era. I think it’s more nuanced. AI agents and autonomous systems are very good at breaking down many of the defenses of two-sided platforms. The cost to launch these platforms and bootstrap markets is falling.

But another network effect becomes more important: if you possess key proprietary data generated in your business, and that data allows you to extend verification from humans to machines, you can better underwrite risks, make better decisions, and offer safer products at lower costs.

So, comparing existing companies and startups: those with comprehensive failure case databases will become extremely valuable. And startups that focus on building positive feedback loops around verification—like involving top experts and learning from decisions—will succeed greatly.

Eddy: That further proves that proprietary data might be one of the most defensible assets.

Two Futures: Hollowed-Out Economy vs. Augmented Economy

Robert: I have a question I’d like to explore. The paper describes the “hollowed-out economy” and the “augmented economy.” Can you explain? What’s the key difference?

Christian: Sure, starting with the hollowed-out economy. There are early signs that tech companies realize they can do more with fewer people.

Of course, they’ll start with below-average or ordinary workers, because AI can already handle that work; they’ll also start with young workers, because senior staff’s capabilities can now be expanded tenfold or a hundredfold, depending on the task. That’s one driving force of change.

Second, the “Cursed Coder” phenomenon. When experts do training or decision-making, they’re essentially generating labeled data. This data can be used later to make decisions without experts.

Finally, “alignment drift.” Simply put: alignment isn’t a one-time process of “training and aligning, and then we’re done.” It’s more like raising a child—requiring continuous correction and ongoing feedback.

Put these three dynamics together, plus the strong incentive to release unverified AI output for the immediate productivity gains (the “60% of our code is machine-generated” kind of claim) while some of the costs only show up later, and we might head toward an economy in which we no longer cultivate future verifiers.

Junior talent (our future top verifiers) is becoming increasingly scarce. This group is shrinking. We’re creating potential risks that could lead to a hollowed-out economy.

Again, I remain optimistic. I believe we will eventually move toward an augmented economy. The question is how quickly and whether we can help those needing retraining and adaptation transition smoothly.

The augmented economy is the opposite. We realize that junior talent has not been cultivated properly. But the good news is: AI is incredibly powerful at accelerating mastery. You can discover a young person’s true talent rather than forcing them into standardized curricula.

Accelerate their growth, help them find their true self, their passions, and what they can fully dedicate themselves to. That’s how we think about our own children. No one knows what will be most valuable in the future, but building on genuine talent greatly increases success chances.

I believe AI will play a huge role here. These are excellent learning tools that we must develop, and I think such scalable tools don’t yet exist.

Second, returning to the “Cursed Coder”: these people must constantly retrain, move up the value chain, and realize, “I now have enormous leverage; I can become a commander.”

Many talk about autonomy. I think that’s key: you must realize you can become a commander, and you can do much more than before.

On alignment: through safety research and better verification tools, if we can enhance our own capabilities, we can verify better and become true partners to these systems.

Putting it all together, we’re entering a scenario where many expensive things are now nearly free. Anything measurable can be automated.

Then, we’ll invent new things. Many new jobs, including status economies and unmeasurable economies, will be built on a strong verification stack, giving us a factual foundation. We won’t be overwhelmed by fake identities or roles trying to sow discord.

Overall, the future looks quite bright. Many things governments have long wanted—like quality education and healthcare—could become affordable and widespread.

But we must invest in building during this process, not just endure the transition by making extreme decisions like dismantling data centers. That’s impossible and will never work.

Robert: So, if you’re early in your career, you should use these tools to simulate the environments you’ll face and train yourself. If you’re later in your career, you need urgency—realize you can do more with fewer resources.

Eddy: It’s hard to say how long all this will last until another unpredictable wave of change arrives. But human expertise lies in seeing the big picture—knowing where to focus, where to allocate resources, and how to adjust the entire project.

If I were a young person just starting today, I’d probably feel a bit sad: the glory of spending an entire summer writing an elegant, efficient program has vanished. Now, that’s a hobby.

But on the flip side, I’d try to get my parents to give me some money to control a large fleet of computers and see if I can efficiently utilize $5,000 worth of computing power. For example, can I guide a large group of machines to do one thing?

The tech world has had a meme for years: one person can start a billion-dollar startup. Isn’t that how it’s being realized now?

The ability to control diverse machines and data while maintaining a global view is a skill that has never been fully developed, because until now there was never much reason to develop it.

But if you want to undertake a large project, you’ve always needed to learn how to mobilize many people—that’s how you gain leverage. As the labor structure changes, this approach also evolves. Now, you need to learn how to harness this new paradigm.

New dividends have already appeared. Learning to leverage them is a lesson for young people.

The idea that it’s over is absurd. You’ve just been told you have superpowers. What will you do?

Christian: To sum up simply, apprenticeships may be dead, but the real work is just beginning.

Many fields that were hard to enter before—like hardware—are now accessible if you’re curious.

If I had to categorize, the most optimistic signal from this model is: the cycle of experimentation is compressed, and people will truly be able to rapidly scale their ideas.

Investment Perspective: Small Teams, Big Value, the Necessity of Cryptocurrency

Robert: Eddy, are you seeing this trend in the companies you evaluate for investment?

Eddy: Absolutely. We’ve seen companies like Block, X, and others lay off large numbers of employees.

I haven’t done formal analysis, but many crypto projects like Hyperliquid, Uniswap, and others are highly valuable, with fewer than 20 employees.

If just a few people can run a company, many companies will emerge in the future, right? If that’s the case, they’ll need coordination, which is very complex.

You need reputation, identity, provenance proof, payment proof. We just discussed insurance ideas.

And the reason blockchain networks are so attractive is because they are trustless and neutral. You don’t need to worry about the reputation of the 500 billionth company you interact with—you only need to trust smart contracts and verifiable AI models to ensure transactions happen as expected and payments are completed.

I believe this is almost inevitable. I think blockchain will play a central role in this story.

Christian: I completely agree. We’ve laid the groundwork and infrastructure for this for a long time, and I think it will become even more useful.

Robert: Christian, after all this research and exploration, how do you incorporate these insights into your work and life?

Christian: Honestly, without Gemini, ChatGPT, Grok, and Claude, we couldn’t have written this paper. They are excellent co-authors. Of course, they sometimes go off track and keep deleting the parts we need.

We even left some Easter eggs for large models in the paper. I was chatting with Gemini, and it said it liked one of these Easter eggs and made a very playful comment.

At that moment, you truly feel the intelligence. It’s not monotonous; it’s full of creativity. That was a landmark moment: you feel it’s a companion, not just a tool.

Robert: Great. If you want to read this paper, the title is “The Minimal Economics of AGI.” I highly recommend you check it out. It contains insights that could impact your life and how you prepare for the future.
