Claude accounts hit by large-scale fraud and unauthorized charges! Victims in Taiwan and Canada lose tens of thousands of TWD; follow these three steps to protect yourself immediately

ChainNewsAbmedia

Recently, multiple Claude AI users have issued warnings in Facebook groups and Reddit posts, saying the credit cards linked to their Anthropic accounts have been hit by repeated fraudulent charges. Attackers make large purchases through the platform's "Gift Subscriptions" feature. Several victims from Taiwan, Canada, and the United States reported losses of more than ten thousand TWD each, drawing public attention.

Malicious Chrome extension lurked for three years, silently bypassing passwords and two-factor authentication

A Taiwanese victim, Mr. Hong, posted in the Claude Taiwan Facebook group that the incident traced back to software he downloaded in April 2023. During that installation he unknowingly installed a malicious Chrome extension called "Start New Tab Search." The extension belongs to the Adware.NewTab family and had been lurking for as long as three years.

The extension holds permission to intercept HTTP requests, letting it continuously steal users' cookies and session tokens in the background. Once an attacker obtains a valid session token, they need neither the account password nor two-factor authentication (2FA); they can make purchases from the user's account directly. This is why every measure the victims took afterward, such as pausing the card, changing passwords, and enabling 2FA, failed to stop the fraud.

Four charges within three days, changing cards didn’t help—Anthropic interface flaw exposed

Mr. Hong said that in the early hours of April 16, he found his account had been automatically charged for the "Gift Max 5X" plan. Even though he immediately took every standard security measure, including pausing the card, changing his password, enabling two-factor authentication, logging out of all devices, revoking API keys, and switching to a new payment method, the fraudulent charges continued until April 20.

In the end, four fraudulent transactions went through on Mr. Hong's card, with losses totaling $400. During that time, his phone kept receiving Mastercard 3D Secure verification messages and Stripe verification codes, indicating that the attackers were repeatedly attempting to charge the new card.

He also pointed out that Anthropic's billing interface offers no "remove credit card" option, only "update payment method," making it impossible for users to detach a card from their account entirely.

Victims at home and abroad speak up, with card-fraud cases shared on Reddit as well

Notably, a Canadian user posted on Reddit's r/ClaudeAI forum saying their account had been used to purchase a "Gift Max 20x" gift subscription on their credit card, resulting in losses of about 950 CAD (about $700 USD), with multiple charges continuing afterward.

They also pointed out that on the consumer review site Trustpilot, multiple users from the Netherlands, the UK, and the US have reported similar cases.

Anthropic customer support is essentially useless; contacting the credit card company is the fastest way for users to help themselves

Both victims faced the same dilemma: Anthropic's general support address, support@anthropic.com, provided hardly any timely assistance. After Mr. Hong reported the issue on April 18, he sent four more follow-up emails, but no human response arrived within 72 hours, only automated replies from the Fin AI agent. The Canadian user likewise described the Fin AI support as extremely poor.

At present, both have turned to their credit card companies to file chargebacks, which has become the quickest remedy available to victims. Mr. Hong also suggested that anyone trying to reach the Anthropic team email both usersafety@anthropic.com and disclosure@anthropic.com at the same time, which may improve the chances of a direct response.

How to protect yourself? Three steps to immediately check your Claude account

In response to this ongoing attack, victims are calling on all Claude users to take the following protective measures immediately.

First, log in to claude.ai right away, go to "Settings → Billing → Invoices," and check for any unauthorized "Gift Max" charge records. If you find any, contact the issuing bank immediately to file a chargeback; do not wait for Anthropic customer support to respond.

Next, open Chrome's extensions management page (chrome://extensions/), carefully review all installed extensions, and remove any you don't recognize, that come from suspicious developers, or that you don't remember installing yourself. These malicious programs often hide behind innocuous-sounding names, claiming to enhance or beautify the browser interface.
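For readers who want to audit extensions outside the browser, the check above can also be done by scanning Chrome's profile folder on disk. The sketch below is a minimal, unofficial example: it assumes the default Chrome profile locations (adjust the path for secondary profiles such as "Profile 1") and simply reads each extension's manifest.json, flagging any that request the cookie- or request-interception permissions the attack described here relied on. Extension names may appear as `__MSG_...__` localization placeholders rather than readable text.

```python
import json
import sys
from pathlib import Path

# Default Chrome profile extension folders per OS (assumed layouts;
# adjust "Default" if you use a secondary profile such as "Profile 1").
EXT_DIRS = {
    "darwin": Path.home() / "Library/Application Support/Google/Chrome/Default/Extensions",
    "linux": Path.home() / ".config/google-chrome/Default/Extensions",
    "win32": Path.home() / "AppData/Local/Google/Chrome/User Data/Default/Extensions",
}

def list_extensions(ext_dir: Path):
    """Return (extension_id, name, permissions) for every installed extension version."""
    results = []
    if not ext_dir.is_dir():
        return results
    # Layout on disk: <extension_id>/<version>/manifest.json
    for manifest in ext_dir.glob("*/*/manifest.json"):
        try:
            data = json.loads(manifest.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, OSError):
            continue  # skip unreadable or corrupt manifests
        name = data.get("name", "?")  # may be a __MSG_...__ i18n placeholder
        perms = data.get("permissions", [])
        results.append((manifest.parent.parent.name, name, perms))
    return results

if __name__ == "__main__":
    for ext_id, name, perms in list_extensions(EXT_DIRS.get(sys.platform, Path())):
        # Permissions of the kind the reported malware abused.
        risky = {"webRequest", "cookies", "<all_urls>"} & set(map(str, perms))
        flag = "  <-- review: can read cookies/requests" if risky else ""
        print(f"{ext_id}  {name}  {perms}{flag}")
```

A flagged permission is not proof of malice (many legitimate extensions use `cookies` or `webRequest`), but anything you don't recognize with those permissions deserves immediate removal.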

Finally, submit an official support ticket to Anthropic, and at the same time email both usersafety@anthropic.com and disclosure@anthropic.com to improve your chances of getting a real person to handle it.

The victims also hope Anthropic will quickly strengthen the platform's protections, including letting users truly remove payment methods, adding second-factor verification for Gift transactions made within a short time window, and automatically freezing accounts once users report fraud.

This article, "Claude accounts hit by large-scale card fraud! Victims in Taiwan and Canada lose over ten thousand; three steps to protect yourself immediately," first appeared on Chain News ABMedia.

