Recently, multiple Claude AI users have warned in Facebook groups and Reddit posts that the credit cards linked to their Anthropic accounts are being charged fraudulently, over and over. The attackers make large purchases through the platform's "Gift Subscriptions" feature. Several victims in Taiwan, Canada, and the United States have reported losses exceeding NT$10,000, drawing public attention.
A malicious Chrome extension lurked for three years, silently bypassing passwords and two-factor authentication
A Taiwanese victim, Mr. Hong, posted in a Claude Taiwan Facebook group that the incident traced back to software he downloaded in April 2023. In the process, he unknowingly installed a malicious Chrome extension called "Start New Tab Search." The extension belongs to the Adware.NewTab family and had been lurking for as long as three years.
The extension holds permission to intercept HTTP requests, and it continuously stole the user's cookies and session tokens in the background. Once an attacker obtains a valid session token, they need neither the account password nor two-factor authentication (2FA); they can make purchases directly under the user's account. This is why every countermeasure the victims took afterward, including pausing the card, changing passwords, and enabling 2FA, failed to stop the fraud.
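The mechanism described above can be sketched in a few lines. This is an illustration only: the cookie name and header values below are hypothetical placeholders, not Anthropic's actual API details.

```python
# Sketch of session-token replay, the technique described above.
# Cookie name and values are hypothetical, for illustration only.

def forge_request_headers(stolen_session_token: str) -> dict:
    """Build HTTP headers that impersonate the victim's logged-in browser.

    A server that authenticates requests by session cookie checks neither
    the password nor 2FA on each request, so a replayed token looks
    identical to the real user's traffic.
    """
    return {
        # A session cookie like this is what the malicious extension
        # exfiltrated by intercepting the victim's HTTP traffic.
        "Cookie": f"sessionKey={stolen_session_token}",
        "User-Agent": "Mozilla/5.0",  # mimic an ordinary browser
    }

# Changing the password only rotates the credential checked at login;
# it does not invalidate tokens already issued unless the service
# explicitly revokes all active sessions.
headers = forge_request_headers("eyJhbGciOi...stolen...")
```

This is also why "log out of all devices" is the one standard step that can help, but only if the service actually revokes every outstanding session token when it is used.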
Four charges in three days and a new card didn't help: Anthropic interface flaw exposed
Mr. Hong said that in the early hours of April 16, he discovered his account had been charged automatically to purchase the "Gift Max 5X" plan. Even though he immediately took every standard security measure (pausing the card, changing his password, enabling two-factor authentication, logging out of all devices, revoking API keys, and switching to a new payment method), the fraudulent charges continued through April 20.
In the end, four fraudulent transactions went through, with losses totaling US$400. Throughout that period, his phone kept receiving Mastercard 3D Secure verification messages and Stripe verification codes, indicating that the attackers were repeatedly attempting to charge the new card.
He also pointed out that Anthropic's billing interface has no "remove credit card" option, only "update payment method," making it impossible for users to detach a card from their account entirely.
Victims in Taiwan and abroad come forward, with card-fraud cases shared on Reddit as well
Notably, a Canadian user also posted on Reddit's r/ClaudeAI forum, saying their account had been used to purchase a "Gift Max 20x" gift subscription, resulting in losses of about CA$950 (roughly US$700), with multiple charges made in succession.
The user pointed out that on the consumer review site Trustpilot, multiple users from the Netherlands, the UK, and the US have reported similar cases.
Anthropic customer support is essentially useless; contacting the credit card company is the fastest way for users to help themselves
Both victims faced the same dilemma: Anthropic's general support channel, support@anthropic.com, provided almost no timely assistance. After Mr. Hong reported the issue on April 18, he sent four follow-up emails, yet within 72 hours he received no response from a human, only automated replies from a Fin AI Agent. The Canadian user likewise said the Fin AI support was extremely poor.
For now, both have turned to their credit card companies to file chargeback disputes, currently the fastest remedy available to victims. Mr. Hong also suggested that anyone trying to reach the Anthropic team email both usersafety@anthropic.com and disclosure@anthropic.com at the same time, which may improve the odds of a direct response.
How to protect yourself? Three steps to immediately check your Claude account
In response to this ongoing attack, victims are urging all Claude users to take the following protective measures immediately.
First, log in to claude.ai right away, go to "Settings → Billing → Invoices," and check for any unauthorized "Gift Max" related charges. If you find any, contact the issuing bank immediately to file a chargeback dispute; do not wait for Anthropic customer support to respond.
Next, open Chrome's extension management page (chrome://extensions/), carefully review all installed extensions, and remove any you don't recognize, that come from suspicious developers, or that you don't remember installing. These malicious programs often disguise themselves with names suggesting interface enhancement or beautification.
Finally, submit an official support ticket to Anthropic, and at the same time email both usersafety@anthropic.com and disclosure@anthropic.com to improve your chances of reaching a human.
The victims also hope Anthropic will quickly strengthen the platform's protections, including letting users fully remove payment methods, requiring step-up verification for Gift transactions made within a short window, and automatically freezing accounts once users report fraud.
This article, "Claude accounts hit by large-scale card fraud! Taiwan and Canada victims lose over ten thousand; three steps to protect yourself immediately," first appeared on Chain News ABMedia.