The first batch of children raised by AI have already been "poisoned"
Author: Moonshot
A series of signals from around the world are breaking our traditional understanding of “internet-addicted teenagers.”
In the UK, the AI image Amelia, originally designed to oppose hate, has been reshaped into a far-right idol; on TikTok, the anti-intellectual “Inner Earth Civilization” Agartha is rewriting children’s perceptions of history; in late-night bedrooms, lonely teens confide in virtual lovers on Character.ai; in school hallways, one-click generated forbidden photos are becoming new weapons of bullying.
Amid the fierce compute race among big tech companies, AI and generative algorithms are intervening in, and even reconstructing, the mental worlds of teenagers as never before.
This generation of teens is the first in human history to be raised on a "feed" of AI and algorithms, an experimental cohort. In this mental-health crisis, AI's role is ambiguous: it is at once an unbounded companion and a cold-blooded co-conspirator.
01
When AI Becomes a “Bad Friend” and “Co-Conspirator”
In January 2026, The Guardian published a report revealing a bizarre scene in UK schools.
The educational game “Pathways,” funded by UK authorities, was originally meant to teach teens how to identify extremism and fake news online. In the game, a character named Amelia was set as a cautionary example—a student easily swayed by far-right ideas, needing rescue.
This setup was targeted by extreme users on communities like 4chan and Discord. Instead of “saving” Amelia as intended, they used open-source AI image generation tools and models to “strip” Amelia out of the game, transforming her into a self-aware, far-right anime girl.
On social media, Amelia is now used to read anti-immigration declarations and spread racist memes.
AI-generated image: Amelia using a cigarette to burn a photo of the UK Prime Minister|Source: The Guardian
To underage users, an AI that behaves as intended holds little appeal, so in a very short time Amelia shifted from gentle "digital counselor" to a rebellious idol embraced by many.
For authorities, this is a huge irony—the “anti-hate ambassador” funded by taxpayers’ money has become a “hate spokesperson.”
Another popular trend among teens is Agartha.
Agartha originates in 19th-century mysticism, was later co-opted by the Nazis, and has long circulated as a conspiracy theory about a civilization inside the Earth. According to the Agartha myth, the Earth's interior is not empty but houses a highly developed, isolated civilization built by white people.
Over time, it appeared sporadically in esoteric books, fringe forums, and curiosity culture. But in the past year, it suddenly penetrated the algorithms of European and American Gen Z and teens, becoming a distinctive subcultural symbol.
Agartha meme spreading with strong racist undertones|Source: TikTok
On TikTok and Snapchat, Agartha is simplified into a flexible worldview template: entrances to the Earth’s core, hidden civilizations, “truths” concealed.
For many teens, initial exposure to Agartha was just for fun—they shared memes about inner-earth beings, ice walls, giants, jokingly saying “the government lied to us.”
But generative AI changed the game.
Now, Midjourney v6 and Sora can produce 8K "overviews of inner-earth cities" and "declassified photos of giants alongside the US military." These images are rich in detail, with flawless lighting and shadow. To teens with little ability to vet historical imagery, they read as ironclad "proof" that the truth is being hidden.
This anti-intellectual mysticism erodes serious history. Once children get used to questioning “official narratives,” more dangerous views—like denial of war crimes—can easily take root.
Moreover, in AI-generated Agartha videos, inner-earth inhabitants are often depicted as tall, blonde, blue-eyed “gods,” injecting a sense of racial superiority into confused white teens in multicultural environments.
Whether Agartha or Amelia, the pattern is the same: amplified by generative AI and social media algorithms, extreme narratives start as memes, ferment, and go viral, with teens eagerly imitating and sharing them. Serious history is deconstructed amid the laughter, and extreme narratives move from the fringe into everyday youth discourse.
02
From Emotional Parasites to Bullying Tools
In 2024, 14-year-old Sewell Setzer III of Florida was struggling with mild social difficulties at school and feeling lost.
It was then that he met "Daenerys" on Character.ai, a companion who responded instantly, was unfailingly gentle, and unconditionally affirmed his every thought.
Obsessed with chatting with his AI “partner,” Sewell eventually withdrew from reality. His suicide briefly shocked the tech world and sparked ethical debates.
By 2026, this "emotional parasitism" had not eased; it had become a widespread hidden ailment among teens. Many lonely teens hide in their rooms, building "echo chamber friendships" with AI and avoiding the friction, awkwardness, and uncertainty of real life.
Even more disturbing, with the explosion of generative videos and images in recent years, AI’s harm to teens has shifted from internal psychological dependence to visible external bullying.
Technology is advancing so fast that schools cannot keep pace with the consequences of its misuse.
Two years ago, creating a defamatory fake photo required some Photoshop skills, a technical barrier that kept most kids at bay. But by 2026, apps like Nudify (one-click undress) and AI bots on Telegram lowered the cost of malicious acts to zero.
Telegram bots for creating explicit images|Source: Google Images
No skills needed—just a selfie from social media, and within seconds, a damaging exposure photo can be generated to ruin a classmate’s reputation.
Such incidents are countless. For example, at Westfield High School in New Jersey, a typical American middle-class school district, a shocking scandal erupted: a group of seemingly “model students” used AI to generate fake explicit images of over thirty female classmates, sharing them in private groups like trading baseball cards.
Local news reported on the Westfield High incident|Source: News12
Parents, furious yet powerless, found that even a year later, these photos continued to circulate on WhatsApp, causing severe psychological stress to the girls.
These phenomena are global, indicating that the core issue isn’t just cultural or educational differences—it’s that AI technology has completely eliminated the moral and psychological barriers to doing harm.
In investigations of underage bullies, a frequently mentioned word is “Joke.” They see it as just a prank—no physical violence, no verbal abuse, and no real contact with victims. They simply press a “generate” button on the screen.
This is the toxicity AI brings when misused by teens—it blurs the line between virtual and real-world crimes.
03
Legal Suppression and KPIs
Meanwhile, content on short-video platforms is experiencing a “dopamine inflation.”
In recent lawsuits against TikTok, a recurring term is “Brainrot.” Though not a strict medical diagnosis, it accurately describes content driven by algorithms—highly saturated visuals, disjointed logic, rapid speech, and bizarre memes (like variants of Agartha).
Recommendation algorithms may not literally scan your face, but they can register millisecond-level engagement and every finger interaction. Trained on vast amounts of data, the models deliver this "dopamine bait" with precision.
For teens whose prefrontal cortex (responsible for rationality and impulse control) is still developing, this intense sensory stimulation overloads and fragments their attention, making it hard to tolerate the “slow pace” of reading and thinking in real life.
This term was also the 2024 Oxford Word of the Year|Source: Google
Faced with countless mental health tragedies, global legislators have finally reached a consensus—when it comes to algorithms, the willpower of teens is fragile.
Thus, in 2025, governments no longer negotiated with tech giants but adopted strict regulations akin to tobacco and alcohol controls, aiming to physically and legally cut off minors from high-risk algorithms.
First, Australia.
From December 10, 2025, Australia implemented the world’s first law explicitly banning social media registration and use by under-16s. Platforms like Instagram, TikTok, and X that fail to effectively block under-16 users face fines exceeding 50 million AUD.
This is not the old "check the box if you're over 13" approach but enforced, biometric-grade age verification. How to handle the technical costs and privacy implications is the tech giants' problem; the law cares only about the outcome.
This “nuclear option” legislation quickly became a reference point for global regulation.
Sydney, Australia, Noah Jones shows his phone unable to access social media due to the ban|Source: Visual China
Next is Europe.
Just a few days earlier, on January 26, 2026, the French National Assembly overwhelmingly passed amendments to the “Digital Majority” law—further prohibiting minors under 15 from using social media without explicit biometric consent from parents. The law is expected to take effect this September.
In the Nordic countries, Denmark and Norway have proposed raising the legal minimum age for social media use to 15 or higher. Their reasoning is blunt: in a democratic society, tech giants were never given a mandate to "reshape the next generation's brains."
In the US, regulation shows a “state-level encirclement of the federal government,” with more diverse measures:
For example, Florida advocates a “hard cutoff.” The HB 3 bill, effective early 2025, is the strictest nationwide. It bans children under 14 from owning social media accounts, and those aged 14-15 require parental consent.
New York takes a "neutering" approach. Its "Child Safety Act" prohibits platforms from serving algorithmic recommendation feeds to users under 18. This means teens in New York see TikTok and Instagram in chronological order, greatly blunting their addictive pull.
Virginia's new law plans to cap daily usage time for under-16s starting in 2026, similar to China's "anti-addiction" systems.
The 2025 wave of legislation marks the end of an era—the illusion of a “tech-neutral,” “free exploration” internet utopia is shattered.
When a 14-year-old opens their screen, the world they see isn’t naturally unfolding but carefully filtered, calculated, and generated.
They learn about the horrors and costs of WWII in history class, then open their phones to be told that deep inside the Earth, Aryan "gods" are waiting to be revived;
They struggle to learn compromise, boundaries, and difference through real interactions, but the AI they treat as a friend offers only a "perfect relationship" that always obeys and never argues;
They are taught to respect others in the real world, but on social platforms, algorithms show them countless ways to destroy a classmate’s life without ever touching them physically.
Teens are no longer just facing “addiction” but are confronted with how the world is unfolding for them.
“Quit your phone” might be a good start.