In short: A former Meta engineer in London is under criminal investigation after allegedly building a program to extract around 30,000 private Facebook photos while bypassing the platform’s security checks, the latest in a series of privacy and security failures to emerge from the company over the past four years.
Meta’s internal security systems are designed to prevent precisely this kind of abuse: unauthorised access to user data by the people who built the platform. According to the Metropolitan Police and the Press Association, they did not prevent it here. A man in his 30s, a former Meta engineer who lives in London, was arrested in November 2025 on suspicion of unauthorised access to computer material under the Computer Misuse Act. He has since been released on bail and must next report to police in May 2026. The case, which came to light this week, is now being handled by the Metropolitan Police’s specialist Cybercrime Unit following a referral from the FBI.
How the breach allegedly worked
The engineer is believed to have written a program that could pull private images from Facebook accounts while evading the security checks Meta uses to flag suspicious internal access. The result, according to investigators, was the extraction of around 30,000 photographs belonging to users who had not made those images public. Meta told the BBC that the breach was discovered more than a year ago, placing the discovery before April 2025, after which the company said it immediately dismissed the employee and referred the matter to law enforcement.
The mechanics of how the program avoided detection have not been disclosed publicly by either Meta or the Metropolitan Police. What is clear from the timeline is that a period of several months elapsed between the discovery of the breach and the arrest in November 2025, consistent with a cross-jurisdictional investigation that involved the FBI before the referral reached UK law enforcement. Meta said it has since notified the Facebook users whose images were downloaded and upgraded its security systems to address the vulnerability.
A company with a long record of security failures
The investigation adds to a catalogue of privacy and security problems that have followed Meta for years, and which regulators have consistently found serious enough to warrant substantial financial penalties. Meta poured tens of billions into AI infrastructure expansion throughout 2025, but that investment has not insulated it from an equally significant accumulation of regulatory liability.
In November 2022, the Irish Data Protection Commission, Meta’s lead GDPR regulator in the European Union, fined the company €265m after an investigation into data scraping that exposed the personal details of up to 533 million Facebook users. The data, including names, phone numbers, and email addresses, had appeared on an online hacking forum in April 2021. The DPC found that Meta had failed to implement data protection by design and by default, as required under Articles 25(1) and 25(2) of the GDPR.
Two years later, in September 2024, the same regulator returned with a further fine of €91m after finding that Meta had inadvertently stored the passwords of approximately 600 million Facebook and Instagram users in plaintext on its internal systems, without any cryptographic protection. The passwords were never exposed to external parties, but the failure to secure them internally violated multiple provisions of the GDPR, including the basic requirement to implement appropriate technical security measures. The two fines together amount to €356m in penalties from a single European regulator over a two-year period.
Meta has also faced escalating legal pressure over the design of its platforms. In March 2026, a Los Angeles jury found Meta and Google negligent in a landmark social media safety case, concluding that Instagram and YouTube had been designed in ways that were dangerous to younger users, that the companies were aware of those risks, and that they failed to warn users of the harm. The plaintiff, a now 20-year-old woman known publicly as Kaley, was awarded $6m in damages, split between $3m in compensatory damages and $3m in punitive damages, with Meta bearing 70% of the total liability. Both companies said they disagreed with the verdict and plan to appeal.
The insider threat problem
The London investigation illustrates a category of risk that large technology platforms find particularly difficult to manage: the trusted insider. External breaches, in which attackers probe systems from outside the organisation, can be defended against through firewalls, rate limiting, and anomaly detection. The challenge with insider threats is that the person doing the probing has legitimate access to the systems they are abusing, and may understand precisely which monitoring systems to circumvent.
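Neither Meta nor the police have described the monitoring that eventually caught the activity, but the general technique behind such controls is well understood: compare each employee's data-access volume against their own historical baseline and flag large deviations. The sketch below is a hypothetical illustration of that idea, not a description of Meta's actual systems; the log format, employee IDs, and threshold are invented for the example. It uses the median as the baseline so that a single bulk-extraction day does not inflate the employee's own average and mask itself.

```python
from collections import defaultdict
from statistics import median

# Hypothetical illustration of baseline anomaly detection on internal
# access logs -- NOT a description of Meta's real monitoring systems.

def flag_anomalies(access_log, threshold=5.0):
    """access_log: list of (employee_id, day, count) tuples, where count
    is the number of private records an employee accessed that day.
    Flags any day whose count exceeds `threshold` times the employee's
    median daily count (median, so one spike can't hide itself by
    dragging up a mean-based baseline)."""
    per_employee = defaultdict(list)
    for emp, day, count in access_log:
        per_employee[emp].append((day, count))

    flagged = []
    for emp, days in per_employee.items():
        baseline = median(c for _, c in days)
        for day, count in days:
            if count > threshold * baseline:
                flagged.append((emp, day, count))
    return flagged

log = [
    ("eng_42", "2025-01-01", 20),
    ("eng_42", "2025-01-02", 25),
    ("eng_42", "2025-01-03", 3000),  # bulk extraction day
    ("eng_77", "2025-01-01", 30),
    ("eng_77", "2025-01-02", 28),
]
print(flag_anomalies(log))  # -> [('eng_42', '2025-01-03', 3000)]
```

The weakness the article describes is visible even in this toy: an insider who knows the threshold can pace an extraction below it, spreading 30,000 downloads over enough days that no single day looks unusual.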
Meta’s claim to have detected the breach and acted swiftly, firing the employee and making a law enforcement referral, suggests its internal controls eventually flagged the unusual activity, even if they did not prevent it. What remains unanswered is how long the extraction program operated before it was detected, and how 30,000 photographs were able to leave the platform without triggering an earlier alert. Those questions will presumably form part of the Metropolitan Police’s investigation, alongside any criminal charges the Crown Prosecution Service decides to pursue once the bail period concludes.
For the Facebook users whose private images were taken, the notification from Meta will have been little comfort. The photographs were, by definition, ones those users had chosen not to make public. Whether they were personal, intimate, or simply private is unknown. What is known is that they are now in circulation outside the platform, and that the person allegedly responsible for taking them was employed by the company those users trusted to protect them.