DeepMind CEO laments that AI commercialization is moving too fast: if the technology had stayed in the lab a few more years, humanity might already have conquered cancer.


AI is rapidly transforming humanity, with new technologies and tools emerging every few weeks or even days, but Demis Hassabis, CEO of Google DeepMind and 2024 Nobel laureate in Chemistry, believes that the pace of AI competition is too rushed. If he had his way, AI would spend more years in labs honing its capabilities, and perhaps humans would have already solved cancer.

Hassabis shared this sentiment during a podcast hosted by video journalist Cleo Abram, expressing his frustration with current AI development. In a past interview with Time magazine, he described himself as a scientist, emphasizing that his exploration of AI is driven by a pursuit of knowledge and understanding of the world.

He mentioned that his original intention for entering AI was not to create chatbots, but to accelerate scientific discovery. DeepMind's most renowned achievement is AlphaFold, a system that solved the "protein folding problem" that had eluded biologists for 50 years. Hassabis pointed out that this benefits over 3 million scientists worldwide, especially in research on diseases like malaria: because AI provides a free database of predicted structures, researchers can skip basic structural experiments and move directly into drug development.

Image source: YouTube. AlphaFold's research achievements helped earn Hassabis the Nobel Prize.

He believes that if AI had been allowed to stay in labs for a few more years, focusing on these critical issues, humans might have already made more decisive breakthroughs in cancer treatment or materials science.

Cutting-edge technology reaches the public within months, but key problems lose resources

In the interview, Hassabis outlined his ideal path for AI development—what he calls the “CERN model.” He hopes that the development of artificial general intelligence (AGI) can proceed like the operation of CERN’s Large Hadron Collider—rigorously, cautiously, and thoughtfully, applying scientific methods to ensure progress only after thorough understanding of each step.

However, reality has diverged from Hassabis’s ideal script. The explosion of ChatGPT and breakthroughs in generative AI at the end of 2022 sparked a chaotic global business race. He admits that this situation has accelerated AI deployment, with advanced technologies reaching the public within months, but it has also diverted resources away from truly critical issues.

To maintain market and technological leadership, labs have been forced into a breakneck pace. Hassabis concedes that they can no longer develop the technology with the philosophical reflection and cautious evaluation he once envisioned.

While AI chatbots are useful for summaries and brainstorming, they inherently still have flaws like hallucinations. Yet, commercial pressures have pushed these experimental products rapidly into the mainstream market. This has resulted in a large portion of R&D focus and resources being channeled into the release cycles of general foundational models aimed at mass users.

To balance reality and ideals, Hassabis adopts a more pragmatic approach—leading Google’s consumer AI products like Gemini, while also investing in applied AI (Narrow AI). He believes that we don’t need to wait for artificial general intelligence; systems like AlphaFold that solve specific problems can already bring tangible benefits to energy, materials science, and healthcare.

AlphaGo’s legendary move reveals AI’s potential to surpass human thinking

Hassabis’s confidence in AI largely stems from the 2016 AlphaGo match against South Korean Go master Lee Sedol. During that game, AlphaGo played the famous “Move 37,” a move commentators initially dismissed because no human would play it, yet one that ultimately carried AlphaGo to victory.

Image source: gogameguru.com. The move AlphaGo played, which was considered beyond human intuition, was seen by Hassabis as a breakthrough in AI’s ability to transcend human cognitive frameworks.

From this signal, Hassabis realized that AI had developed the ability to go beyond human experience and seek entirely new solutions. He aims to apply this creative capacity—surpassing human thinking—to scientific fields.

AlphaFold exemplifies this mindset. Traditional methods require hundreds of thousands of dollars and years of work to determine a single protein structure; AlphaFold 2 has already predicted the structures of nearly all 200 million known proteins.

Now, Hassabis is leading his team deeper into drug discovery. Traditional drug development takes about 10 years with only a 10% success rate. He founded Isomorphic Labs, which uses AlphaFold 3 and subsequent models for “virtual screening”: AI can simulate millions of compound-protein interactions in minutes while also checking for toxicity across over 20,000 human proteins, allowing most failures to be filtered out in silico before laboratory testing and focusing resources on only the most promising candidates.

Concerns about AI bringing two major risks

However, as AI technology advances into an era of AI agents, Hassabis’s concerns about the future have become more concrete. He categorizes the risks into two main types. The first is “malicious actors”—individuals or nations—who might misuse technologies originally intended for curing diseases or developing new materials for harmful purposes.

The second, more sci-fi but very real threat, is “going rogue.” When systems become extremely intelligent and autonomous, ensuring they execute human-set goals accurately and do not bypass safety measures becomes an extremely difficult technical challenge.

In response to these challenges, Hassabis calls for leading AI research institutions, governments, and academia to establish international cooperation mechanisms, emphasizing the need for more safety research on the final stretch toward AGI.

Despite regrets that AI didn’t stay longer in labs, Hassabis remains optimistic about the next 50 years. He envisions AI helping humans achieve breakthroughs in nuclear fusion, discovering room-temperature superconductors, and even reducing space travel energy costs to near zero. To him, AI is not just a technology but a magnifying glass for exploring the universe’s truths. Whatever the answers may be, he is eager to uncover the truth.

  • This article is reprinted with permission from 《Digital Age》
  • Original title: “Nobel Laureate laments ‘AI commercialization too fast’: If labs had been kept for a few more years, humans might have conquered cancer!”
  • Original author: Chen Jianjun