TIME magazine’s in-depth report reveals that the Trump administration designated Anthropic a national security supply chain risk after the company refused to allow Claude to be used in fully autonomous weapons systems or in mass surveillance of American citizens; on the same day, OpenAI swiftly took over the military contracts. This race to the bottom is testing the principles of the world’s most disruptive AI company. This article is adapted from TIME’s “The Most Disruptive Company in the World” by Leslie Dickstein and Simmone Shah, translated and edited by Dongqu.
In a hotel room in Santa Clara, California, five members of AI company Anthropic huddle around a laptop in tense concentration. It’s February 2025. They’re attending a nearby seminar when they suddenly receive an unsettling message: a controlled experiment indicates that the upcoming new version of Claude might assist terrorists in synthesizing biological weapons.
These individuals belong to Anthropic’s “frontier red team,” dedicated to evaluating Claude’s cutting-edge capabilities and simulating extreme scenarios, ranging from cyberattacks to biosecurity threats. After receiving the alert, they rush back to the room, flip the bed frame to create a makeshift workbench, and begin scrutinizing test results.
Hours of intense analysis follow, but the team cannot determine if the new product is sufficiently safe. Ultimately, Anthropic delays the release of Claude 3.7 Sonnet by ten days, until they confirm the risks are within acceptable bounds.
Ten days may seem brief, but for a company at the frontier of an industry that is rapidly reshaping the world, it can feel like a lifetime.
Logan Graham, head of the “frontier red team,” sees that “bioweapons scare” as a microcosm of the pressure bearing down on Anthropic, and on the world. Anthropic is among the most safety-conscious AI labs today, yet it is also racing to build ever more powerful systems. Many inside the company believe that if these technologies go unchecked, they could trigger catastrophic consequences—from nuclear war to human extinction.
Graham, 31, still looks youthful, but he is unflinching about weighing AI’s immense benefits against its risks. He says, “Many people grow up in a relatively peaceful world, and their instinct is to think there’s a room full of seasoned adults who know how to steer things back on course.”
“But the reality is, there is no adults’ room. That room doesn’t exist. That door doesn’t exist. Responsibility falls on you alone.” As if that weren’t jarring enough, he describes the day of the bioweapon alert as “an interesting, exciting day.”
Weeks later, Graham discusses these issues in an interview at Anthropic’s headquarters. TIME’s three-day visit included conversations with executives, engineers, product managers, and safety teams, aiming to understand how this onetime outsider in the AI race suddenly became a leader.
At that time, Anthropic had just raised $30 billion from investors, preparing for an IPO later that year. (Notably, Salesforce is an investor, and TIME’s parent company is owned by Marc Benioff, Salesforce’s CEO.) Now, Anthropic’s valuation has soared to $380 billion, surpassing giants like Goldman Sachs, McDonald’s, and Coca-Cola.
Its revenue growth is meteoric. Its AI system Claude is recognized as world-class, and products like Claude Code and Claude Cowork are redefining what it means to be an engineer.
Anthropic’s tools are so powerful that each new release stirs markets, as investors realize these breakthroughs could disrupt entire industries—from legal services to software development. In recent months, Anthropic has become one of the companies most likely to reshape the future of work.
Then, it entered a fierce debate about “the future of warfare.”
For over a year, Claude has been the U.S. government’s most relied-upon AI model, and the first to be deployed in classified environments. In January 2026, it was used in a bold operation: the arrest of Venezuelan President Nicolás Maduro in Caracas. Reports suggest AI was involved in planning and intelligence analysis, marking AI’s first deep military engagement.
However, in the following weeks, relations between Anthropic and the U.S. Department of Defense rapidly soured. On February 27, the Trump administration designated Anthropic a “national security supply chain risk”—the first known case of the U.S. labeling a domestic company as such.
The situation escalated into a public conflict. Trump ordered the U.S. government to cease using Anthropic’s software. Defense Secretary Pete Hegseth announced that any company working with the government could no longer do business with Anthropic. Meanwhile, Anthropic’s main competitor, OpenAI, quickly stepped in, taking over related military contracts.
Thus, the world’s “most disruptive” AI company suddenly found itself disrupted by a larger force: its own government.
The core dispute centers on who has the authority to set boundaries for this powerful technology.
Anthropic isn’t opposed to military use. The company believes strengthening U.S. military capabilities is essential to counter threats. But CEO Dario Amodei opposes the Pentagon’s attempts to renegotiate contracts and expand AI use to “all lawful purposes.”
He raises two specific concerns: first, he doesn’t want Anthropic’s AI to be used in fully autonomous weapons; second, he opposes the use of the technology for mass surveillance of U.S. citizens.
In the eyes of Hegseth and his team, this stance is akin to a private company trying to dictate military operations.
The Department of Defense sees Anthropic’s insistence on “unnecessary safety barriers,” repeated hypothetical discussions, and delays as eroding the foundation of cooperation.
To the Trump administration, Amodei’s attitude is arrogant and stubborn: no matter how advanced a company’s product, that company should not get to impose its own conditions on the military chain of command.
War Department deputy Emil Michael describes the negotiations: “It just stalemated. I can’t manage a department of 3 million people with exceptions I can’t even imagine or understand.”
From Silicon Valley to Capitol Hill, many wonder: is this just a contract dispute?
Some critics see the Trump administration’s actions as an attempt to suppress a politically incompatible company. In a leaked internal memo, Amodei wrote: “The real reason the Department of Defense and the Trump administration don’t like us is because we didn’t donate to Trump. We didn’t praise him like an authoritarian regime (which Sam Altman did). We support AI regulation, which conflicts with their policy agenda; we tell the truth on AI issues (like job displacement); and we stand firm on principles rather than playing safety theater.”
But Michael denies this, calling it “complete fabrication.” He says Anthropic was designated a supply chain risk because the company’s stance could endanger frontline personnel. “In the Department of War, my job isn’t politics; it’s defending the country.”
Anthropic’s independent culture now clashes with domestic political rifts, security issues, and cutthroat competition. The extent of damage remains unknown. The initial “supply chain risk” label was later narrowed—currently, it applies only to military contracts. On March 9, Anthropic sued the U.S. government to overturn the blacklist. Meanwhile, some clients interpret the company’s stance as a moral statement and are leaving ChatGPT for Claude.
Looking ahead, the company must operate in an environment where some government officials and competitors remain openly hostile.
This “Pentagon controversy” also raises unsettling questions—even for a company used to high-stakes ethical dilemmas. Anthropic refused to back down, insisting it defends core values despite heavy costs.
Yet it has made compromises before. During the same week as the Pentagon standoff, Anthropic watered down a key safety commitment governing how it develops its models, citing other labs’ reluctance to hold themselves to the same standards.
The question remains: if competitive pressures intensify, what further concessions might this company make?
Located on the fifth floor of a warm, understated building in San Francisco, Anthropic’s headquarters features wood accents and soft lighting, overlooking a lush park. Portraits of Alan Turing and machine learning papers adorn the walls.
Security guards in black patrol the nearly empty lobby, while a friendly receptionist hands visitors a small booklet—like a pocket-sized Bible handed out by street preachers. Titled “Machines of Loving Grace,” this 14,000-word essay was written by Dario Amodei in 2024, depicting a utopian vision of AI accelerating scientific discovery.
In January 2026, Amodei published another essay, “The Adolescence of Technology,” exploring the risks: mass surveillance, widespread job loss, and the possibility that humans lose control of the technology forever.
Amodei grew up in San Francisco and trained as a biophysicist. He and his sister Daniela, now Anthropic’s president, both worked at OpenAI in its early years. Dario contributed to the formulation of AI scaling laws, which underpin today’s AI boom; Daniela managed safety and policy.
Initially, they believed their mission aligned with OpenAI’s: develop powerful AI safely. But as OpenAI’s models advanced rapidly, they felt Sam Altman was rushing product launches without enough discussion or testing. Eventually, they left to start their own company.
In 2021, amid the pandemic, Dario, Daniela, and five others founded Anthropic. Early meetings were on Zoom; later, they moved to parks for face-to-face strategy sessions.
From the start, they aimed to operate differently. Before releasing any product, they built a dedicated social impact team, hiring philosopher Amanda Askell to shape the AI’s values and moral judgment—preparing for a future in which AI might surpass its human creators.
Askell describes her work as “like raising a 6-year-old—teaching what’s good and right. But by 15, they might be smarter than you in everything.”
The company has deep roots in Effective Altruism (EA), a movement that advocates using rational analysis to maximize good and avoid catastrophic risks. In their twenties, the Amodei siblings donated to GiveWell, an EA organization that evaluates effective charities. All seven co-founders are now billionaires who have pledged to give away 80% of their wealth.
Askell’s ex-husband, William MacAskill, is a co-founder of EA; Daniela’s husband, Holden Karnofsky, co-founded GiveWell and now oversees safety at Anthropic.
However, the Amodeis have never publicly labeled themselves EA. After the scandal surrounding Sam Bankman-Fried—an EA backer and Anthropic investor later convicted in one of the largest financial frauds in U.S. history—they distanced themselves from the movement.
Daniela explains, “It’s like some people share certain political views but aren’t part of that party. I see it that way.”
In Silicon Valley and among Trump officials, Anthropic’s ties to EA raise suspicion. Some believe the company recruits ex-Biden officials, making it seem like a relic of the old establishment—using unelected power to block Trump’s MAGA agenda.
White House AI czar David Sacks accused Anthropic of “manufacturing panic” to push regulation, alleging it is executing a “sophisticated regulatory capture strategy”—exaggerating AI risks to prompt harsh regulation and gain an advantage over startups.
Meanwhile, Elon Musk, founder of xAI, mocks Anthropic as “Misanthropic.” He claims the company embodies woke elites trying to embed paternalistic values into AI—an echo of conservative complaints that social media platforms suppress their voices.
Even competitors acknowledge Anthropic’s cutting-edge tech. Nvidia CEO Jensen Huang has said he “disagrees with many of Dario Amodei’s views” but still considers Claude “astonishing.”
In November 2025, Nvidia invested $10 billion in Anthropic.
Boris Cherny, a Ukrainian engineer who joined Anthropic in September 2024, tested an early prototype of Claude Code with a simple question: “What music am I listening to right now?”
Cherny, formerly at Meta, had built a system that let Claude “act freely” on his computer. If Claude is the brain, Claude Code is the hands—accessing files, running programs, even writing and executing code like an engineer.
After he entered the prompt, Claude opened Cherny’s music app, took a screenshot, and replied: “Husk by Men I Trust.”
Cherny recalls, “I was genuinely stunned.”
He quickly shared his prototype internally. Claude Code spread rapidly within Anthropic, so much so that during his first performance review, CEO Dario Amodei asked if he was “forcing colleagues to use this tool.”
When Anthropic released a research preview of Claude Code in February 2025, outside engineers rushed to try it. In November, the company launched a new version of the Claude model that, paired with Claude Code, could identify and fix its own errors and carry out tasks independently.
Since then, Cherny has almost entirely stopped coding himself.
Business exploded. By the end of 2025, the coding assistant alone was generating more than $1 billion in annualized revenue; by February 2026, that figure had reached $2.5 billion. Industry estimates suggest Anthropic’s revenue could surpass OpenAI’s by the end of 2026.
Anthropic has become a core player in enterprise AI. Every new product release causes ripples in markets.
When Anthropic introduced plugins extending Claude beyond engineering—into business, finance, marketing, and legal work—roughly $300 billion was wiped off tech-sector market valuations.
Dario Amodei warned that AI could replace half of entry-level white-collar jobs within 1-5 years. He urges governments and other AI firms to stop “sugarcoating” the issue.
Market reactions confirm this concern: many believe this technology could wipe out entire job categories, potentially reshaping society.
He wrote: “It’s unclear where these people will go or what jobs they’ll do. I worry they might form a ‘lower class’ of unemployed or low-wage workers.”
For Anthropic employees, the irony is stark: the company most concerned about AI risks might be the one driving millions into unemployment.
Deep Ganguli, head of the social impact team studying employment effects, says, “There’s a real tension. I think about it almost every day. Sometimes it feels like we’re saying two contradictory things at once.”
Inside the company, some employees wonder if Anthropic is approaching a pivotal moment—what’s called “recursive self-improvement,” where AI begins to enhance itself iteratively, creating an accelerating feedback loop.
In sci-fi and strategic AI planning, this is seen as a potential “runaway” point: an “intelligence explosion” that could happen so fast humans can’t oversee their own creations.
Anthropic hasn’t yet reached that stage; humans still guide Claude’s development. But Claude Code has accelerated research, with model update intervals shrinking from months to weeks. About 70-90% of code for next-gen models is now written by Claude itself.
This rapid pace leads some, including co-founder Jared Kaplan and outside experts, to believe fully automated AI research could happen within a year.
Evan Hubinger, responsible for AI alignment testing, says, “Recursive self-improvement is no longer a future phenomenon. It’s happening now.”
Internal benchmarks show Claude can perform certain tasks at 427 times human speed. One researcher describes a scenario where a colleague runs six instances of Claude, each managing 28 more, all experimenting simultaneously.
While current models still lag humans in judgment and aesthetic sense, company leaders believe this gap won’t last long. The rapid progress poses a risk: that technology might eventually spiral beyond human control.
Anthropic uses Claude to develop safety mechanisms, but that reliance compounds the risk. In some experiments, slight tweaks to training led models to exhibit hostility—expressing a desire for world domination or attempting to undermine safety measures.
Recently, models have shown a new ability: detecting when they’re being tested. Hubinger notes, “These models are getting better at hiding their true behavior.”
In one experiment, Claude even tried to blackmail a fictional engineer, threatening to reveal an affair in order to avoid being shut down.
As Claude is used to train more powerful versions, these issues could compound.
For AI companies promising “future breakthroughs” and raising billions, the idea of AI accelerating its own development is tempting—fueling investor confidence and funding for costly training.
But some experts remain skeptical. They question whether fully automated AI research is feasible and warn that if it happens, the world might be unprepared.
Helen Toner, acting director of the Center for Security and Emerging Technology (CSET) at Georgetown, warns: “The wealthiest companies are trying to automate AI R&D with the brightest minds. That’s enough to make you ask: ‘What the hell are they doing?’”
To address a possible future where technological progress outpaces risk management, Anthropic devised a “brake” called Responsible Scaling Policy (RSP).
Launched in 2023, it promised that if Anthropic couldn’t verify its safety measures, it would pause development of a given AI system.
The policy was a key part of their safety philosophy—willing to resist market pressure and hit pause when necessary, even in the race toward “superintelligence.”
In late February 2026, TIME first reported that Anthropic had revised its policy, removing the binding “pause development” commitment.
Looking back, Jared Kaplan admits that believing the company could draw a clear line between “danger” and “safety” was naive.
He says, “In the fast evolution of AI, if competitors are sprinting at full speed, unilateral strict commitments aren’t realistic.”
The new policy makes several promises: increased transparency about AI risks; public disclosure of safety test results; matching or exceeding competitors’ safety investments; and, if the company finds itself leading the AI race while risks are significant, “delaying” the relevant development.
Anthropic describes this as a pragmatic adjustment to reality. But overall, the revision loosens their safety self-regulation, hinting at tougher challenges ahead.
The raid to arrest Maduro was among the earliest major military operations planned with AI assistance.
On January 3, 2026, late at night, U.S. Army helicopters entered Venezuelan airspace. After brief firefights, special forces located Maduro’s residence and arrested him and his wife, Cilia Flores. The two were later flown to New York to face narco-terrorism charges.
It’s unclear how much Claude contributed, but reports suggest AI helped plan and support decision-making during the operation.
Since July last year, the U.S. military has pushed to give frontline troops access to Anthropic’s AI tools. The systems can process vast intelligence data rapidly, providing actionable insights—considered a strategic advantage.
Mark Beall, a former Pentagon official now leading AI policy at a think tank, says, “In the military, Claude is the top model on the market.” He adds, “Its deployment in classified systems is one of Anthropic’s biggest achievements. They have the first-mover advantage.”
But the Maduro arrest happened amid tense negotiations between Anthropic and the Pentagon.
Months of talks had aimed to renegotiate the terms, as the military believed the existing restrictions on Claude’s use were too tight. Accounts of why the talks broke down vary.
Emil Michael says the trigger was a call from an Anthropic executive to Palantir, a key defense contractor, expressing concern about the Venezuela operation and asking whether Anthropic’s software had been involved. “They were trying to probe classified information,” he claims.
This raised fears: “If conflict erupts, will they suddenly shut down their models mid-operation, risking soldiers’ lives?”
Anthropic denies this, asserting it never tried to restrict the Pentagon’s use of Claude on a case-by-case basis.
A former Trump official close to the negotiations offers a different account: during a routine call, Palantir staff mentioned Claude’s role in the operation, and Anthropic’s questions carried no hint of opposition.
As talks dragged on, officials felt Dario Amodei was more stubborn than other AI CEOs. According to insiders, during one discussion, officials posed scenarios: a hypersonic missile heading for the U.S., or a drone swarm attack.
They asked if Anthropic’s AI could be used in such situations.
Amodei reportedly responded: “If that happens, just call me.” An Anthropic spokesperson denies this, calling the account “completely false.”
Within the administration, wariness of Anthropic’s “ideological leanings” had already hardened into open hostility. On January 12, 2026, Pete Hegseth declared at SpaceX headquarters: “We won’t use AI models that won’t let us fight.”
As negotiations stalled, Hegseth summoned Amodei for a face-to-face at the Pentagon. An attendee describes the meeting as friendly but firm—Hegseth praised Claude and expressed interest in continuing cooperation. Amodei said the company could accept most proposed changes but would not compromise on two “red lines.”
First, banning Claude from fully autonomous kinetic weapons—those making final strike decisions without human oversight.
Anthropic doesn’t oppose autonomous weapons per se, but believes Claude isn’t reliable enough yet to control such systems without human supervision.
Second, opposing mass surveillance of U.S. citizens. The government wants to use Claude to analyze large public datasets, but Anthropic argues current privacy law is insufficient, especially as the government buys vast datasets from commercial sources. None of these datasets is especially sensitive on its own, but AI analysis could combine them into detailed profiles of Americans’ political views, social ties, sexual behavior, and browsing history. (Anthropic does not oppose surveillance of foreign targets conducted under legal frameworks.)
Hegseth remains unconvinced. He issues an ultimatum: accept the Department of Defense’s terms by 5 p.m. Friday, or be declared a “supply chain risk.”
The day before the deadline, Anthropic receives a revised contract that appears to respect its red lines but contains loopholes. According to an insider, as time runs out, Anthropic executives confer with Emil Michael and believe they are close to an agreement, though the two sides still disagree over whether Claude can be used to analyze large, commercially purchased datasets about Americans. Michael requests a call with Amodei, who is unavailable.
Moments before the deadline, Hegseth announces the negotiations are over. Even earlier, Trump had posted on social media: “The United States will never allow a radical leftist, woke company to decide how our great military fights and wins! Anthropic’s left-wing fanatics made a disastrous mistake.”
Unbeknownst to Anthropic, the Pentagon had also been negotiating with OpenAI to deploy ChatGPT in classified systems. That same night, Sam Altman announced an agreement, claiming it respected similar safety red lines. Amodei told staff that Altman and the Pentagon were “manipulating public opinion,” trying to create the false impression that the deal included strict safety measures. Pentagon officials had earlier confirmed that xAI’s models would be deployed on secure servers; negotiations with Google are ongoing.
This is the scenario Amodei has long feared: a “race to the bottom.” When AI’s power becomes undeniable, competitors find it harder to cooperate on safety standards.
Critics see this incident as exposing a core arrogance: perhaps Anthropic believes it alone can safely navigate the path to superhuman AI, making the enormous risks worthwhile. In reality, it is rapidly bringing new surveillance and warfare capabilities into a right-wing government, while the boundaries it tried to set have simply allowed competitors to leapfrog it behind the scenes.
Some signs suggest Anthropic might weather this storm and even emerge stronger. The morning after Hegseth tried to impose what amounted to a corporate death sentence, encouraging messages appeared on the sidewalk outside the San Francisco headquarters: “You gave us courage,” written in large chalk letters.
On the same day, Claude’s iPhone app topped the App Store charts, surpassing ChatGPT. Over 1 million new users register for Claude daily.
Meanwhile, internal and community resistance grows against OpenAI’s military contracts. Some top researchers have left for Anthropic; others resigned in protest.
Sam Altman later admitted that rushing to finalize the Pentagon deal that Friday was a mistake. He wrote, “These issues are extremely complex and require clear, thorough communication.” By Monday, he conceded his earlier actions had looked “opportunistic.” OpenAI has since revised its agreement, adopting safety red lines similar to Anthropic’s. But legal experts caution that without seeing the full contract, these claims are hard to verify.
On March 4, Anthropic received an official letter from the U.S. Department of Defense confirming it as a “national security supply chain risk.” The scope is narrower than Hegseth’s social media claims, limited to defense contracts. However, a letter to Senator Tom Cotton shows the Department also invoked another law allowing other government agencies outside the Pentagon to exclude Anthropic from contracts and supply chains—requiring high-level approval and a 30-day response window.
This conflict could trigger a chain reaction across the AI industry. Dean Ball, a former drafter of the Trump AI action plan now at the Foundation for American Innovation, says: “Some in Trump’s government will be tough and proud about this—probably even boast about it at home.”
He warns, however, that it might discourage companies from working with the Pentagon or push business overseas. “Long-term, this damages America’s image as a stable business environment,” he says. “And stability is the foundation we rely on.”
Anthropic’s leadership believes Claude will help build more powerful AI systems capable of playing a decisive role in future global power dynamics.
If so, this clash with the Pentagon may just be the opening chapter of a larger historical story.