In short: Meta faces a convergence of lawsuits across the US, Australia, and the UK alleging the company knowingly profited from scam ads on Facebook and Instagram, with its own internal documents projecting that 10% of 2024 revenue, roughly $16 billion, came from fraudulent advertising. The cases span a $500 million pump-and-dump scheme, deepfake celebrity endorsements, financial professional impersonation, and cryptocurrency fraud, while leaked internal assessments show Meta calculated that scam revenue would exceed the cost of any regulatory settlement.
Meta is facing a convergence of lawsuits, class actions, and regulatory investigations over scam advertisements on Facebook and Instagram that, according to the company's own internal projections, generated roughly $16 billion in revenue in 2024, approximately 10% of Meta's total advertising income. The legal actions span the United States, Australia, and the United Kingdom. Collectively, they allege that Meta knowingly profited from fraudulent ads, including AI-generated deepfake celebrity endorsements, pump-and-dump stock schemes, fake investment platforms, and unauthorised impersonation of financial professionals, while maintaining ad moderation systems that were structurally inadequate to prevent the fraud and, in some cases, deliberately weakened to protect revenue.
The most financially significant case, filed in February in the US District Court for the Northern District of California, alleges that Meta facilitated a pump-and-dump scheme involving Jayud Global Logistics, a Chinese company listed on Nasdaq. According to the complaint, scammers acquired 50 million shares at discounted prices in December 2024, then used targeted Facebook and Instagram ads to drive the share price to nearly $8 before dumping their positions in April 2025. Consumer losses exceeded $500 million. A California federal judge dismissed the class action on 25 March, ruling that the plaintiffs had not sufficiently alleged Meta "co-created" the ads, though the dismissal appears to have been without prejudice, leaving the plaintiffs room to amend and refile.
The pattern across jurisdictions
A separate class action, filed by Scott+Scott on behalf of financial professionals John Suddeth and Sara Perkins, alleges that Meta allowed scammers to use their names, images, voices, and professional personas in paid advertisements, causing client diversion, reputational harm, and regulatory inquiries. A bipartisan coalition of US state attorneys general had warned Meta in June 2025 that impersonation ads and fraudulent WhatsApp investment groups were being used for widespread fraud. According to the complaint, materially identical impersonation ads continued running after the warning.
In December, the US Virgin Islands attorney general sued Meta in Superior Court, alleging the company “knowingly profited” from scam ads and “charged fraudsters extra for the right to advertise scams” rather than removing them. The Virgin Islands suit joined actions by 42 other state attorneys general who have taken Meta to court, primarily over child safety but with increasing overlap with advertising fraud. In April, New York attorney general Letitia James issued an investor alert specifically about investment scams on Meta platforms.
In Australia, the Australian Competition and Consumer Commission has been pursuing Meta in the Federal Court since March 2022 over cryptocurrency scam ads that used the likenesses of businessman Dick Smith, television presenter David Koch, and former New South Wales premier Mike Baird. One victim cited in the complaint lost more than A$650,000, and Meta's attempt to have the case dismissed failed in 2023. In the United Kingdom, the Financial Conduct Authority found 1,052 illegal financial advertisements on Meta platforms in a single week in November 2025. One leading UK bank found that 80% of its fraud cases originated on Meta's platforms: Facebook Marketplace accounted for 60% of purchase fraud, Instagram for 67% of investment fraud, and WhatsApp impersonation scams were up 300% year on year. According to UK Finance, Meta's platforms account for 61% of all authorised push payment scams in the UK, with criminals stealing £485.2 million.
The $16 billion question
The scale of the problem is defined by Meta's own internal documents. A Reuters investigation published in November 2025 revealed Meta's projection that 10% of its 2024 global revenue, roughly $16 billion, derived from scam and fraud-related advertising. The company served an estimated 15 billion "higher risk" scam ads per day. Nineteen percent of Meta's ad revenue from China, approximately $3 billion, was linked to scams. Internal documents showed that when enforcement staff proposed shutting down fraudulent accounts, Meta sought assurance that "growth teams would not object given the revenue impact." A subsequent Reuters report in January found that Meta had developed an internal "playbook" to neutralise regulators and had manipulated its ad library to make scam ads harder to find.
Rob Leathern, Meta’s former senior director of product management who led business integrity operations, said of the findings: “The levels that you’re talking about are not defensible.” Meta described the Reuters projections as “a rough and overly-inclusive estimate” and said the documents presented “a selective view that distorts Meta’s approach to fraud and scams.”
The economics are straightforward. Implementing universal advertiser verification would cost Meta approximately $2 billion to build and reduce revenue by up to 4.8%. Internal assessments reportedly noted that "revenue from risky ads would almost certainly exceed the cost of any regulatory settlement," a calculation that treats fines as a cost of doing business rather than a deterrent.
The deepfake dimension
AI-generated deepfakes have become central to the scam ad ecosystem. Deepfake fraud attempts have surged by 3,000% as generative AI tools have become cheaper and more accessible, enabling scammers to create convincing fake video endorsements at scale. Martin Lewis, the UK’s most prominent personal finance campaigner, was targeted with a deepfake video promoting a “Quantum AI” investment scheme. Deepfakes of Donald Trump, Elon Musk, Alexandria Ocasio-Cortez, and Bernie Sanders were used to promote fake government benefit schemes. In Brazil, AI-altered images and voices of prominent physicians promoted fraudulent healthcare products.
The Tech Transparency Project identified 63 scam advertisers responsible for more than 150,600 political ads and $49 million in lifetime spending on Meta. During a 90-day period in mid-2025, at least 45 scam advertisers spent over $18 million. Meta says it protects images of 500,000 celebrities and public figures through automated detection and is testing facial recognition technology to compare faces in suspected scam ads against public figures’ profile pictures. EU lawmakers have agreed to ban AI-generated non-consensual deepfakes through amendments to the AI Act, signalling increasing regulatory appetite to legislate against synthetic media that platforms have failed to police.
What Meta says it is doing
Meta recently rolled out new scam detection tools across Facebook, Instagram, WhatsApp, and Messenger. The company says it removed 159 million scam ads and took down 10.9 million accounts linked to scam operations in 2025, with 92% of scam ads caught proactively before any user report. It disabled 150,000 accounts associated with Southeast Asian scam centre networks and partnered with the Royal Thai Police in disruption operations that led to 21 arrests. Meta is targeting 90% of ad revenue from verified advertisers by the end of 2026, up from 70%. In February, it filed its own lawsuits against scam advertisers in Brazil, China, and Vietnam, and sent cease-and-desist letters to eight former Meta Business Partners offering “un-ban” services to fraudulent advertisers.
The gap between Meta’s enforcement claims and the data in its own internal documents is the through line connecting every lawsuit. The company says it catches 92% of scam ads proactively. Its own projections estimated $16 billion in scam-related revenue in a single year. It removed 159 million scam ads. It served 15 billion higher-risk ads per day. It is investing in facial recognition to detect deepfakes. Its internal assessments concluded that scam revenue would exceed the cost of any regulatory settlement. The numbers do not cohere into a story of a company that failed to notice the problem. They describe a company that noticed the problem, measured it, calculated the cost of fixing it against the cost of not fixing it, and chose the option that preserved revenue. The lawsuits are, in that sense, not about whether Meta knew. They are about what it did with what it knew.