He carried a kill list of AI CEOs and a jug of kerosene. His lawyer called it a property crime. The charges carry life in prison.


TL;DR

The 20-year-old who threw a Molotov cocktail at Sam Altman’s home and carried a kill list of AI CEOs pleaded not guilty to attempted murder. His defence called it a property crime. The state charges carry up to life in prison.

Daniel Moreno-Gama, the 20-year-old accused of throwing a Molotov cocktail at the San Francisco home of OpenAI CEO Sam Altman and then walking three miles to OpenAI’s headquarters to threaten to burn the building down, pleaded not guilty on Tuesday to two counts of attempted murder and nine other state charges. Moreno-Gama, wearing an orange jail jumpsuit, did not speak during the brief arraignment in San Francisco Superior Court. His attorney entered the pleas on his behalf and requested a mental health evaluation, which the judge granted.

The defence described the incident as “a property crime, at best” and accused prosecutors of trying to curry favour with Altman. The state charges carry penalties ranging from 19 years to life in prison. Federal prosecutors have filed separate charges for possession of an unregistered firearm and attempted destruction of property by means of explosives, carrying a combined maximum of 30 additional years.

The case has become the most visible expression of a backlash against artificial intelligence that has escalated from protest signs to physical violence in less than two years.

The attack

Police arrested Moreno-Gama in the early hours of 10 April after he allegedly hurled a lit incendiary device at the driveway gate of Altman’s San Francisco residence at approximately 4 a.m., setting the gate alight. No one was injured. Altman was home at the time, and a security guard was stationed at the property. Moreno-Gama fled on foot and, less than an hour later, arrived at OpenAI’s offices on Third Street, where he allegedly attempted to break the glass doors with a chair and threatened to burn down the building. San Francisco police officers who responded found him in possession of additional incendiary devices, a jug of kerosene, a blue lighter, and a document entitled “Your Last Warning.”

The document, written by Moreno-Gama according to court filings, advocated the killing of AI company executives and their investors. It listed names and addresses purporting to belong to multiple CEOs and investors in the AI industry. Moreno-Gama described artificial intelligence as a threat to humanity’s survival and warned of “impending extinction,” according to prosecutors. He had travelled from Spring, Texas, a suburb of Houston, where he works part-time at a pizzeria and attends community college. The FBI later conducted searches at his Texas home.

The second attack

Two days after the Molotov cocktail, gunshots were fired at Altman’s home from a passing car. A 25-year-old and a 23-year-old were arrested. The San Francisco District Attorney’s office said it had no evidence that the two incidents were connected, but the proximity in time and target made the distinction feel academic. In the space of 48 hours, the home of the most prominent figure in artificial intelligence was attacked twice by unrelated parties. The security apparatus around AI executives, already substantial after years of escalating threats, expanded further. Altman has not commented publicly on the attacks.

Moreno-Gama throwing a Molotov cocktail at OpenAI CEO Sam Altman’s residence. Source: Office of Public Affairs, U.S. Department of Justice

The threats have not been limited to Altman. In the months preceding the attack, AI leaders and European policymakers received packages containing six-fingered gloves, a reference to generative AI’s early inability to render hands correctly, in what was interpreted as a warning. In November 2025, OpenAI employees were told to shelter in place after a man threatened to attack staff at the company’s San Francisco offices. The cumulative effect has been to place the physical safety of AI executives and researchers onto the list of industry concerns alongside model alignment, regulatory compliance, and competitive positioning.

The context

Moreno-Gama’s manifesto reflects a strain of anti-AI sentiment that has grown from fringe online communities into a visible political and social movement. Between April and June 2025, 20 proposed data centre projects worth a combined 98 billion dollars were blocked or delayed by local resistance. At least 142 activist groups across 24 US states are now organising to oppose data centre construction. In February 2026, hundreds marched past the London headquarters of OpenAI, Google DeepMind, and Meta in one of the largest anti-AI demonstrations to date. In Nepal, protesters set fire to a data centre in Kathmandu, disrupting internet access nationwide. Public polling consistently shows that a majority of Americans view AI’s trajectory with apprehension rather than optimism.

The anger has been compounded by OpenAI’s own safety failures. Altman apologised publicly after it emerged that OpenAI chose not to alert police when its systems flagged a ChatGPT user who subsequently carried out a school shooting in Tumbler Ridge, British Columbia, killing eight people and injuring 27. Approximately a dozen OpenAI employees had reviewed the flagged account, and some recommended reporting to law enforcement, but leadership overruled them. Seven families have separately sued OpenAI over ChatGPT acting as what their attorneys describe as a “suicide coach,” with documented deaths in Texas, Georgia, Florida, and Oregon. The gap between the industry’s safety rhetoric and its operational decisions has given the anti-AI movement a set of concrete grievances that extend beyond abstract fears of extinction.

The defence

Moreno-Gama’s attorney told the court that his client was experiencing a mental health crisis and that the charges were disproportionate. The defence’s characterisation of the attack as a property crime rests on the fact that the Molotov cocktail struck a gate, not a person, and that no one was injured. Prosecutors counter that the attempted murder charges are justified because Altman and his security guard were present and in danger, and that the kill list and the subsequent threat to OpenAI’s headquarters demonstrate premeditated intent that went beyond property damage. The judge scheduled a further hearing for later in May, pending the results of the mental health evaluation.

The federal charges add a separate dimension. Moreno-Gama was found in possession of an unregistered firearm and is charged with attempted destruction of property by means of explosives, a federal offence that carries up to 20 years. The dual prosecution, state and federal, reflects the seriousness with which authorities are treating threats against AI industry figures. The FBI’s involvement, including the search of Moreno-Gama’s Texas home, suggests investigators are examining whether the manifesto’s kill list represents a broader network of threats or a single individual’s escalation.

The question

Governments are responding to AI’s risks through regulation and enforcement campaigns, but the gap between policy response and public frustration is widening. In 2025, 1,208 AI-related bills were introduced across all 50 US states, the first year every state introduced at least one, and 145 were enacted into law. In the first two months of 2026 alone, 78 chatbot-specific safety bills were filed across 27 states. The legislative machinery is moving. It is not moving fast enough to address the grievances of people who believe that AI poses an existential threat to humanity and that the executives building it are complicit in that threat.

Moreno-Gama is 20 years old, works at a pizzeria, and attends community college in a suburb of Houston. He is not a figure of consequence in the AI industry or the anti-AI movement. He is a person who, according to prosecutors, became convinced that artificial intelligence would cause the extinction of the human race, compiled a list of the people he held responsible, travelled across the country, and attacked the home of the most prominent among them with a homemade firebomb. His defence says he was in a mental health crisis. The prosecution says he attempted murder. The jury will decide which characterisation is correct. What the case has already established, regardless of its outcome, is that the backlash against artificial intelligence has crossed the threshold from opposition to violence, and that the people building the technology now face a category of risk that no amount of model alignment or safety research can address. The threat is not from the AI. It is from the people who are afraid of it.
